00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1750 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3011 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.055 The recommended git tool is: git 00:00:00.056 using credential 00000000-0000-0000-0000-000000000002 00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.092 Fetching changes from the remote Git repository 00:00:00.094 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.146 Using shallow fetch with depth 1 00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.146 > git --version # timeout=10 00:00:00.196 > git --version # 'git version 2.39.2' 00:00:00.196 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.200 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.200 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.093 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.104 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.115 Checking out Revision 6201031def5bfb7f90a861bc162998684798607e (FETCH_HEAD) 00:00:04.115 > git config core.sparsecheckout # timeout=10 00:00:04.126 > git read-tree -mu HEAD # timeout=10 00:00:04.144 > git checkout -f 6201031def5bfb7f90a861bc162998684798607e # timeout=5 00:00:04.160 Commit message: "scripts/kid: Add issue 3354" 00:00:04.160 > git rev-list --no-walk 6201031def5bfb7f90a861bc162998684798607e # timeout=10 00:00:04.260 [Pipeline] Start of Pipeline 00:00:04.274 [Pipeline] library 00:00:04.276 Loading library shm_lib@master 00:00:04.276 Library shm_lib@master is cached. Copying from home. 00:00:04.292 [Pipeline] node 00:00:04.318 Running on WFP45 in /var/jenkins/workspace/nvme-phy-autotest 00:00:04.319 [Pipeline] { 00:00:04.327 [Pipeline] catchError 00:00:04.328 [Pipeline] { 00:00:04.337 [Pipeline] wrap 00:00:04.344 [Pipeline] { 00:00:04.352 [Pipeline] stage 00:00:04.353 [Pipeline] { (Prologue) 00:00:04.514 [Pipeline] sh 00:00:04.795 + logger -p user.info -t JENKINS-CI 00:00:04.815 [Pipeline] echo 00:00:04.817 Node: WFP45 00:00:04.826 [Pipeline] sh 00:00:05.127 [Pipeline] setCustomBuildProperty 00:00:05.136 [Pipeline] echo 00:00:05.137 Cleanup processes 00:00:05.141 [Pipeline] sh 00:00:05.420 + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:05.678 1943553 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:05.690 [Pipeline] sh 00:00:05.971 ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:05.971 ++ grep -v 'sudo pgrep' 00:00:05.971 ++ awk '{print $1}' 00:00:05.971 + sudo kill -9 00:00:05.971 + true 00:00:05.988 [Pipeline] cleanWs 00:00:05.999 [WS-CLEANUP] Deleting project workspace... 00:00:05.999 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.005 [WS-CLEANUP] done 00:00:06.008 [Pipeline] setCustomBuildProperty 00:00:06.018 [Pipeline] sh 00:00:06.297 + sudo git config --global --replace-all safe.directory '*' 00:00:06.356 [Pipeline] nodesByLabel 00:00:06.357 Found a total of 1 nodes with the 'sorcerer' label 00:00:06.366 [Pipeline] httpRequest 00:00:06.370 HttpMethod: GET 00:00:06.370 URL: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:06.374 Sending request to url: http://10.211.164.96/packages/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:06.386 Response Code: HTTP/1.1 200 OK 00:00:06.387 Success: Status code 200 is in the accepted range: 200,404 00:00:06.387 Saving response body to /var/jenkins/workspace/nvme-phy-autotest/jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:09.640 [Pipeline] sh 00:00:09.923 + tar --no-same-owner -xf jbp_6201031def5bfb7f90a861bc162998684798607e.tar.gz 00:00:09.940 [Pipeline] httpRequest 00:00:09.945 HttpMethod: GET 00:00:09.945 URL: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:09.946 Sending request to url: http://10.211.164.96/packages/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:09.949 Response Code: HTTP/1.1 200 OK 00:00:09.949 Success: Status code 200 is in the accepted range: 200,404 00:00:09.950 Saving response body to /var/jenkins/workspace/nvme-phy-autotest/spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:28.814 [Pipeline] sh 00:00:29.103 + tar --no-same-owner -xf spdk_36faa8c312bf9059b86e0f503d7fd6b43c1498e6.tar.gz 00:00:33.308 [Pipeline] sh 00:00:33.626 + git -C spdk log --oneline -n5 00:00:33.626 36faa8c31 bdev/nvme: Fix the case that namespace was removed during reset 00:00:33.626 e2cb5a5ee bdev/nvme: Factor out nvme_ns active/inactive check into a helper function 00:00:33.626 4b134b4ab bdev/nvme: Delay callbacks when the next operation is a failover 00:00:33.626 d2ea4ecb1 llvm/vfio: Suppress checking leaks for `spdk_nvme_ctrlr_alloc_io_qpair` 00:00:33.626 3b33f4333 test/nvme/cuse: Fix typo 00:00:33.647 [Pipeline] } 00:00:33.662 [Pipeline] // stage 00:00:33.669 [Pipeline] stage 00:00:33.670 [Pipeline] { (Prepare) 00:00:33.687 [Pipeline] writeFile 00:00:33.702 [Pipeline] sh 00:00:33.985 + logger -p user.info -t JENKINS-CI 00:00:33.998 [Pipeline] sh 00:00:34.281 + logger -p user.info -t JENKINS-CI 00:00:34.294 [Pipeline] sh 00:00:34.578 + cat autorun-spdk.conf 00:00:34.579 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.579 SPDK_TEST_IOAT=1 00:00:34.579 SPDK_TEST_NVME=1 00:00:34.579 SPDK_TEST_NVME_CLI=1 00:00:34.579 SPDK_TEST_OCF=1 00:00:34.579 SPDK_RUN_UBSAN=1 00:00:34.579 SPDK_TEST_NVME_CUSE=1 00:00:34.579 SPDK_TEST_SCHEDULER=1 00:00:34.586 RUN_NIGHTLY=1 00:00:34.592 [Pipeline] readFile 00:00:34.617 [Pipeline] withEnv 00:00:34.619 [Pipeline] { 00:00:34.633 [Pipeline] sh 00:00:34.918 + set -ex 00:00:34.918 + [[ -f /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf ]] 00:00:34.918 + source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:00:34.918 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:34.918 ++ SPDK_TEST_IOAT=1 00:00:34.918 ++ SPDK_TEST_NVME=1 00:00:34.918 ++ SPDK_TEST_NVME_CLI=1 00:00:34.919 ++ SPDK_TEST_OCF=1 00:00:34.919 ++ SPDK_RUN_UBSAN=1 00:00:34.919 ++ SPDK_TEST_NVME_CUSE=1 00:00:34.919 ++ SPDK_TEST_SCHEDULER=1 00:00:34.919 ++ RUN_NIGHTLY=1 00:00:34.919 + case $SPDK_TEST_NVMF_NICS in 00:00:34.919 + DRIVERS= 00:00:34.919 + [[ -n '' ]] 00:00:34.919 + exit 0 00:00:34.928 [Pipeline] } 00:00:34.946 [Pipeline] // withEnv 00:00:34.952 
[Pipeline] } 00:00:34.968 [Pipeline] // stage 00:00:34.978 [Pipeline] catchError 00:00:34.980 [Pipeline] { 00:00:34.995 [Pipeline] timeout 00:00:34.995 Timeout set to expire in 30 min 00:00:34.997 [Pipeline] { 00:00:35.012 [Pipeline] stage 00:00:35.014 [Pipeline] { (Tests) 00:00:35.030 [Pipeline] sh 00:00:35.320 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvme-phy-autotest 00:00:35.320 ++ readlink -f /var/jenkins/workspace/nvme-phy-autotest 00:00:35.320 + DIR_ROOT=/var/jenkins/workspace/nvme-phy-autotest 00:00:35.320 + [[ -n /var/jenkins/workspace/nvme-phy-autotest ]] 00:00:35.320 + DIR_SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:35.320 + DIR_OUTPUT=/var/jenkins/workspace/nvme-phy-autotest/output 00:00:35.320 + [[ -d /var/jenkins/workspace/nvme-phy-autotest/spdk ]] 00:00:35.320 + [[ ! -d /var/jenkins/workspace/nvme-phy-autotest/output ]] 00:00:35.320 + mkdir -p /var/jenkins/workspace/nvme-phy-autotest/output 00:00:35.320 + [[ -d /var/jenkins/workspace/nvme-phy-autotest/output ]] 00:00:35.320 + cd /var/jenkins/workspace/nvme-phy-autotest 00:00:35.320 + source /etc/os-release 00:00:35.320 ++ NAME='Fedora Linux' 00:00:35.320 ++ VERSION='38 (Cloud Edition)' 00:00:35.320 ++ ID=fedora 00:00:35.320 ++ VERSION_ID=38 00:00:35.320 ++ VERSION_CODENAME= 00:00:35.320 ++ PLATFORM_ID=platform:f38 00:00:35.320 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:35.320 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:35.320 ++ LOGO=fedora-logo-icon 00:00:35.320 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:35.320 ++ HOME_URL=https://fedoraproject.org/ 00:00:35.320 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:35.320 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:35.320 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:35.320 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:35.320 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:35.320 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:35.320 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:35.320 ++ SUPPORT_END=2024-05-14 00:00:35.320 ++ VARIANT='Cloud Edition' 00:00:35.320 ++ VARIANT_ID=cloud 00:00:35.320 + uname -a 00:00:35.320 Linux spdk-wfp-45 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:35.320 + sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status 00:00:38.614 Hugepages 00:00:38.614 node hugesize free / total 00:00:38.614 node0 1048576kB 0 / 0 00:00:38.614 node0 2048kB 0 / 0 00:00:38.614 node1 1048576kB 0 / 0 00:00:38.614 node1 2048kB 0 / 0 00:00:38.614 00:00:38.614 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:38.614 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:38.614 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:38.614 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:38.614 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:38.614 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:38.614 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:38.614 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:00:38.614 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:38.614 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:00:38.614 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:38.614 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:38.614 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:38.614 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:38.614 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:38.614 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:38.614 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 
00:00:38.614 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:38.614 + rm -f /tmp/spdk-ld-path 00:00:38.614 + source autorun-spdk.conf 00:00:38.614 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.614 ++ SPDK_TEST_IOAT=1 00:00:38.614 ++ SPDK_TEST_NVME=1 00:00:38.614 ++ SPDK_TEST_NVME_CLI=1 00:00:38.614 ++ SPDK_TEST_OCF=1 00:00:38.614 ++ SPDK_RUN_UBSAN=1 00:00:38.614 ++ SPDK_TEST_NVME_CUSE=1 00:00:38.614 ++ SPDK_TEST_SCHEDULER=1 00:00:38.614 ++ RUN_NIGHTLY=1 00:00:38.614 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:38.614 + [[ -n '' ]] 00:00:38.614 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:38.614 + for M in /var/spdk/build-*-manifest.txt 00:00:38.614 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:38.614 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:00:38.614 + for M in /var/spdk/build-*-manifest.txt 00:00:38.614 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:38.614 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/ 00:00:38.614 ++ uname 00:00:38.614 + [[ Linux == \L\i\n\u\x ]] 00:00:38.614 + sudo dmesg -T 00:00:38.614 + sudo dmesg --clear 00:00:38.614 + dmesg_pid=1944437 00:00:38.614 + [[ Fedora Linux == FreeBSD ]] 00:00:38.614 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.614 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:38.614 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:38.614 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:38.614 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:38.614 + [[ -x /usr/src/fio-static/fio ]] 00:00:38.614 + export FIO_BIN=/usr/src/fio-static/fio 00:00:38.614 + FIO_BIN=/usr/src/fio-static/fio 00:00:38.614 + sudo dmesg -Tw 00:00:38.614 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\e\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:38.614 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:38.614 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:38.614 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.614 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:38.614 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:38.614 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.614 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:38.614 + spdk/autorun.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf 00:00:38.614 Test configuration: 00:00:38.614 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.614 SPDK_TEST_IOAT=1 00:00:38.614 SPDK_TEST_NVME=1 00:00:38.614 SPDK_TEST_NVME_CLI=1 00:00:38.614 SPDK_TEST_OCF=1 00:00:38.614 SPDK_RUN_UBSAN=1 00:00:38.614 SPDK_TEST_NVME_CUSE=1 00:00:38.614 SPDK_TEST_SCHEDULER=1 00:00:38.614 RUN_NIGHTLY=1 19:55:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:00:38.614 19:55:36 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:38.614 19:55:36 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:38.614 19:55:36 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:38.614 19:55:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.614 19:55:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.614 19:55:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.614 19:55:36 -- paths/export.sh@5 -- $ export PATH 00:00:38.614 19:55:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:38.614 19:55:36 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output 00:00:38.614 19:55:36 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:38.614 19:55:36 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714067736.XXXXXX 00:00:38.614 19:55:36 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714067736.hIFAWQ 00:00:38.614 19:55:36 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:38.614 19:55:36 -- common/autobuild_common.sh@441 -- $ '[' 
-n '' ']' 00:00:38.614 19:55:36 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/' 00:00:38.614 19:55:36 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:38.614 19:55:36 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:38.614 19:55:36 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:38.614 19:55:36 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:00:38.614 19:55:36 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.614 19:55:36 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk' 00:00:38.614 19:55:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:38.614 19:55:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:38.614 19:55:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk 00:00:38.614 19:55:36 -- spdk/autobuild.sh@16 -- $ date -u 00:00:38.614 Thu Apr 25 05:55:36 PM UTC 2024 00:00:38.614 19:55:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:38.614 LTS-24-g36faa8c31 00:00:38.614 19:55:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:38.614 19:55:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:38.614 19:55:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:38.614 19:55:36 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:00:38.614 19:55:36 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:38.614 19:55:36 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.614 ************************************ 00:00:38.614 START TEST ubsan 00:00:38.614 ************************************ 00:00:38.614 19:55:36 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:00:38.614 using ubsan 00:00:38.614 00:00:38.614 real 0m0.000s 00:00:38.614 user 0m0.000s 00:00:38.614 sys 0m0.000s 00:00:38.614 19:55:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:38.614 19:55:36 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.614 ************************************ 00:00:38.614 END TEST ubsan 00:00:38.614 ************************************ 00:00:38.874 19:55:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:38.874 19:55:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:38.874 19:55:36 -- spdk/autobuild.sh@47 -- $ [[ 1 -eq 1 ]] 00:00:38.874 19:55:36 -- spdk/autobuild.sh@48 -- $ ocf_precompile 00:00:38.874 19:55:36 -- common/autobuild_common.sh@419 -- $ run_test autobuild_ocf_precompile _ocf_precompile 00:00:38.874 19:55:36 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:00:38.874 19:55:36 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:38.874 19:55:36 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.874 ************************************ 00:00:38.874 START TEST autobuild_ocf_precompile 00:00:38.874 ************************************ 00:00:38.874 19:55:36 -- common/autotest_common.sh@1104 -- $ _ocf_precompile 00:00:38.874 19:55:36 -- common/autobuild_common.sh@21 -- $ echo --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk 00:00:38.874 19:55:36 -- common/autobuild_common.sh@21 -- $ sed s/--enable-coverage//g 00:00:38.874 19:55:36 -- common/autobuild_common.sh@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --with-ublk 00:00:38.874 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:00:38.874 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:00:39.133 Using 'verbs' RDMA provider 00:00:51.912 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:06.825 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:06.825 Creating mk/config.mk...done. 00:01:06.825 Creating mk/cc.flags.mk...done. 00:01:06.825 Type 'make' to build. 00:01:06.825 19:56:02 -- common/autobuild_common.sh@22 -- $ make -j72 include/spdk/config.h 00:01:06.825 19:56:02 -- common/autobuild_common.sh@23 -- $ CC=gcc 00:01:06.825 19:56:02 -- common/autobuild_common.sh@23 -- $ CCAR=ar 00:01:06.825 19:56:02 -- common/autobuild_common.sh@23 -- $ make -j72 -C /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf exportlib O=/var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:01:06.825 make: Entering directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf' 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/alru.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/acp.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cache.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cfg.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cleaner.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_composite_volume.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_core.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_ctx.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_err.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_debug.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_def.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io_class.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_logger.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_mngt.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_metadata.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_queue.h 00:01:06.825 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_stats.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_types.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_volume.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/promotion/nhit.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp_structs.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru_structs.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_ops.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_priv.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop_structs.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.c 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.h 00:01:06.825 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.c 
00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_debug.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_ops.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_ops.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_bit.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.h 00:01:06.826 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cache_line.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_common.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_internal.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition_structs.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment_id.h 00:01:06.826 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_status.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_structs.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_cache.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_flush.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_misc.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_io_class.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_def_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_class.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru_structs.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_metadata.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue_priv.h 00:01:06.826 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_builder.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume_priv.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.c 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.h 00:01:06.826 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_structs.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/ops.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io_allocator.h 00:01:06.827 INSTALL 
/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.c 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_stats.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.h 00:01:06.827 INSTALL /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.c 00:01:06.827 CC env_ocf/mpool.o 00:01:06.827 CC env_ocf/ocf_env.o 00:01:06.827 CC env_ocf/src/ocf/cleaning/alru.o 00:01:06.827 CC env_ocf/src/ocf/cleaning/acp.o 00:01:06.827 CC env_ocf/src/ocf/cleaning/cleaning.o 00:01:06.827 CC env_ocf/src/ocf/cleaning/nop.o 00:01:06.827 CC env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.o 00:01:06.827 CC env_ocf/src/ocf/concurrency/ocf_concurrency.o 00:01:06.827 CC env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.o 00:01:06.827 CC env_ocf/src/ocf/concurrency/ocf_mio_concurrency.o 00:01:06.827 CC env_ocf/src/ocf/concurrency/ocf_pio_concurrency.o 00:01:06.827 CC env_ocf/src/ocf/engine/cache_engine.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_bf.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_common.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_d2c.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_discard.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_inv.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_ops.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_fast.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_rd.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_pt.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_wb.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_wa.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_wi.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_wo.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_zero.o 00:01:06.827 CC env_ocf/src/ocf/engine/engine_wt.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_collision.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_cleaning_policy.o 
00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_core.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_eviction_policy.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_io.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_misc.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_partition.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_passive_update.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_raw_atomic.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_raw_dynamic.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_raw_volatile.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_raw.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_superblock.o 00:01:06.827 CC env_ocf/src/ocf/mngt/ocf_mngt_common.o 00:01:06.827 CC env_ocf/src/ocf/metadata/metadata_segment.o 00:01:06.827 CC env_ocf/src/ocf/mngt/ocf_mngt_cache.o 00:01:06.827 CC env_ocf/src/ocf/mngt/ocf_mngt_core_pool.o 00:01:06.827 CC env_ocf/src/ocf/mngt/ocf_mngt_core.o 00:01:06.827 CC env_ocf/src/ocf/mngt/ocf_mngt_flush.o 00:01:06.827 CC env_ocf/src/ocf/mngt/ocf_mngt_io_class.o 00:01:06.827 CC env_ocf/src/ocf/ocf_composite_volume.o 00:01:06.827 CC env_ocf/src/ocf/ocf_cache.o 00:01:06.827 CC env_ocf/src/ocf/mngt/ocf_mngt_misc.o 00:01:06.827 CC env_ocf/src/ocf/ocf_core.o 00:01:06.827 CC env_ocf/src/ocf/ocf_ctx.o 00:01:06.827 CC env_ocf/src/ocf/ocf_logger.o 00:01:06.827 CC env_ocf/src/ocf/ocf_io.o 00:01:06.827 CC env_ocf/src/ocf/ocf_lru.o 00:01:06.827 CC env_ocf/src/ocf/ocf_metadata.o 00:01:06.827 CC env_ocf/src/ocf/ocf_io_class.o 00:01:06.827 CC env_ocf/src/ocf/ocf_request.o 00:01:06.827 CC env_ocf/src/ocf/ocf_queue.o 00:01:06.827 CC env_ocf/src/ocf/ocf_space.o 00:01:06.827 CC env_ocf/src/ocf/ocf_seq_cutoff.o 00:01:06.827 CC env_ocf/src/ocf/ocf_stats_builder.o 00:01:06.827 CC env_ocf/src/ocf/ocf_stats.o 00:01:06.827 CC env_ocf/src/ocf/promotion/nhit/nhit.o 00:01:06.827 CC env_ocf/src/ocf/promotion/nhit/nhit_hash.o 00:01:06.827 CC env_ocf/src/ocf/promotion/promotion.o 00:01:06.827 CC env_ocf/src/ocf/ocf_volume.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_alock.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_async_lock.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_cache_line.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_cleaner.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_generator.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_io.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_list.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_parallelize.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_pipeline.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_rbtree.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_realloc.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_refcnt.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_request.o 00:01:06.827 CC env_ocf/src/ocf/utils/utils_user_part.o 00:01:07.087 LIB libspdk_ocfenv.a 00:01:07.347 cp /var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a /var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:01:07.347 make: Leaving directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf' 00:01:07.347 19:56:05 -- common/autobuild_common.sh@25 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a' 00:01:07.347 19:56:05 -- common/autobuild_common.sh@27 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd 
--with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a 00:01:07.606 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:01:07.606 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:01:07.882 Using 'verbs' RDMA provider 00:01:20.673 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:32.889 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:32.889 Creating mk/config.mk...done. 00:01:32.889 Creating mk/cc.flags.mk...done. 00:01:32.889 Type 'make' to build. 00:01:32.889 00:01:32.889 real 0m52.840s 00:01:32.889 user 0m51.708s 00:01:32.889 sys 0m37.862s 00:01:32.889 19:56:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:32.889 19:56:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.889 ************************************ 00:01:32.889 END TEST autobuild_ocf_precompile 00:01:32.889 ************************************ 00:01:32.889 19:56:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:32.889 19:56:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:32.889 19:56:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:32.889 19:56:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:32.889 19:56:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:32.889 19:56:29 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a --with-shared 00:01:32.889 Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk 00:01:32.889 Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:01:32.889 Using 'verbs' RDMA provider 00:01:45.102 Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:55.099 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:55.099 Creating mk/config.mk...done. 00:01:55.099 Creating mk/cc.flags.mk...done. 00:01:55.099 Type 'make' to build. 00:01:55.099 19:56:52 -- spdk/autobuild.sh@69 -- $ run_test make make -j72 00:01:55.099 19:56:52 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:55.099 19:56:52 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:55.099 19:56:52 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.099 ************************************ 00:01:55.099 START TEST make 00:01:55.099 ************************************ 00:01:55.099 19:56:52 -- common/autotest_common.sh@1104 -- $ make -j72 00:01:55.099 make[1]: Nothing to be done for 'all'. 
00:02:05.150 The Meson build system 00:02:05.150 Version: 1.3.1 00:02:05.150 Source dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk 00:02:05.150 Build dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp 00:02:05.150 Build type: native build 00:02:05.150 Program cat found: YES (/usr/bin/cat) 00:02:05.150 Project name: DPDK 00:02:05.150 Project version: 23.11.0 00:02:05.150 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:05.150 C linker for the host machine: cc ld.bfd 2.39-16 00:02:05.150 Host machine cpu family: x86_64 00:02:05.150 Host machine cpu: x86_64 00:02:05.150 Message: ## Building in Developer Mode ## 00:02:05.150 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:05.150 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:05.150 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:05.150 Program python3 found: YES (/usr/bin/python3) 00:02:05.150 Program cat found: YES (/usr/bin/cat) 00:02:05.150 Compiler for C supports arguments -march=native: YES 00:02:05.150 Checking for size of "void *" : 8 00:02:05.150 Checking for size of "void *" : 8 (cached) 00:02:05.150 Library m found: YES 00:02:05.150 Library numa found: YES 00:02:05.150 Has header "numaif.h" : YES 00:02:05.150 Library fdt found: NO 00:02:05.150 Library execinfo found: NO 00:02:05.150 Has header "execinfo.h" : YES 00:02:05.150 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:05.150 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:05.150 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:05.150 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:05.150 Run-time dependency openssl found: YES 3.0.9 00:02:05.150 Run-time dependency libpcap found: YES 1.10.4 00:02:05.150 Has header "pcap.h" with dependency libpcap: YES 00:02:05.150 Compiler for C supports arguments -Wcast-qual: YES 00:02:05.150 Compiler for C supports arguments -Wdeprecated: YES 00:02:05.150 Compiler for C supports arguments -Wformat: YES 00:02:05.150 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:05.150 Compiler for C supports arguments -Wformat-security: NO 00:02:05.150 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:05.150 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:05.150 Compiler for C supports arguments -Wnested-externs: YES 00:02:05.150 Compiler for C supports arguments -Wold-style-definition: YES 00:02:05.150 Compiler for C supports arguments -Wpointer-arith: YES 00:02:05.150 Compiler for C supports arguments -Wsign-compare: YES 00:02:05.150 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:05.150 Compiler for C supports arguments -Wundef: YES 00:02:05.150 Compiler for C supports arguments -Wwrite-strings: YES 00:02:05.150 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:05.150 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:05.150 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:05.151 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:05.151 Program objdump found: YES (/usr/bin/objdump) 00:02:05.151 Compiler for C supports arguments -mavx512f: YES 00:02:05.151 Checking if "AVX512 checking" compiles: YES 00:02:05.151 Fetching value of define "__SSE4_2__" : 1 00:02:05.151 Fetching value of define "__AES__" : 1 
00:02:05.151 Fetching value of define "__AVX__" : 1 00:02:05.151 Fetching value of define "__AVX2__" : 1 00:02:05.151 Fetching value of define "__AVX512BW__" : 1 00:02:05.151 Fetching value of define "__AVX512CD__" : 1 00:02:05.151 Fetching value of define "__AVX512DQ__" : 1 00:02:05.151 Fetching value of define "__AVX512F__" : 1 00:02:05.151 Fetching value of define "__AVX512VL__" : 1 00:02:05.151 Fetching value of define "__PCLMUL__" : 1 00:02:05.151 Fetching value of define "__RDRND__" : 1 00:02:05.151 Fetching value of define "__RDSEED__" : 1 00:02:05.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:05.151 Fetching value of define "__znver1__" : (undefined) 00:02:05.151 Fetching value of define "__znver2__" : (undefined) 00:02:05.151 Fetching value of define "__znver3__" : (undefined) 00:02:05.151 Fetching value of define "__znver4__" : (undefined) 00:02:05.151 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:05.151 Message: lib/log: Defining dependency "log" 00:02:05.151 Message: lib/kvargs: Defining dependency "kvargs" 00:02:05.151 Message: lib/telemetry: Defining dependency "telemetry" 00:02:05.151 Checking for function "getentropy" : NO 00:02:05.151 Message: lib/eal: Defining dependency "eal" 00:02:05.151 Message: lib/ring: Defining dependency "ring" 00:02:05.151 Message: lib/rcu: Defining dependency "rcu" 00:02:05.151 Message: lib/mempool: Defining dependency "mempool" 00:02:05.151 Message: lib/mbuf: Defining dependency "mbuf" 00:02:05.151 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:05.151 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:05.151 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:05.151 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:05.151 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:05.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:05.151 Compiler for C supports arguments -mpclmul: YES 00:02:05.151 Compiler for C supports arguments -maes: YES 00:02:05.151 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:05.151 Compiler for C supports arguments -mavx512bw: YES 00:02:05.151 Compiler for C supports arguments -mavx512dq: YES 00:02:05.151 Compiler for C supports arguments -mavx512vl: YES 00:02:05.151 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:05.151 Compiler for C supports arguments -mavx2: YES 00:02:05.151 Compiler for C supports arguments -mavx: YES 00:02:05.151 Message: lib/net: Defining dependency "net" 00:02:05.151 Message: lib/meter: Defining dependency "meter" 00:02:05.151 Message: lib/ethdev: Defining dependency "ethdev" 00:02:05.151 Message: lib/pci: Defining dependency "pci" 00:02:05.151 Message: lib/cmdline: Defining dependency "cmdline" 00:02:05.151 Message: lib/hash: Defining dependency "hash" 00:02:05.151 Message: lib/timer: Defining dependency "timer" 00:02:05.151 Message: lib/compressdev: Defining dependency "compressdev" 00:02:05.151 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:05.151 Message: lib/dmadev: Defining dependency "dmadev" 00:02:05.151 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:05.151 Message: lib/power: Defining dependency "power" 00:02:05.151 Message: lib/reorder: Defining dependency "reorder" 00:02:05.151 Message: lib/security: Defining dependency "security" 00:02:05.151 Has header "linux/userfaultfd.h" : YES 00:02:05.151 Has header "linux/vduse.h" : YES 00:02:05.151 Message: lib/vhost: Defining dependency "vhost" 00:02:05.151 Compiler for C supports 
arguments -Wno-format-truncation: YES (cached) 00:02:05.151 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:05.151 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:05.151 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:05.151 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:05.151 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:05.151 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:05.151 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:05.151 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:05.151 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:05.151 Program doxygen found: YES (/usr/bin/doxygen) 00:02:05.151 Configuring doxy-api-html.conf using configuration 00:02:05.151 Configuring doxy-api-man.conf using configuration 00:02:05.151 Program mandb found: YES (/usr/bin/mandb) 00:02:05.151 Program sphinx-build found: NO 00:02:05.151 Configuring rte_build_config.h using configuration 00:02:05.151 Message: 00:02:05.151 ================= 00:02:05.151 Applications Enabled 00:02:05.151 ================= 00:02:05.151 00:02:05.151 apps: 00:02:05.151 00:02:05.151 00:02:05.151 Message: 00:02:05.151 ================= 00:02:05.151 Libraries Enabled 00:02:05.151 ================= 00:02:05.151 00:02:05.151 libs: 00:02:05.151 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:05.151 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:05.151 cryptodev, dmadev, power, reorder, security, vhost, 00:02:05.151 00:02:05.151 Message: 00:02:05.151 =============== 00:02:05.151 Drivers Enabled 00:02:05.151 =============== 00:02:05.151 00:02:05.151 common: 00:02:05.151 00:02:05.151 bus: 00:02:05.151 pci, vdev, 00:02:05.151 mempool: 00:02:05.151 ring, 00:02:05.151 dma: 00:02:05.151 00:02:05.151 net: 00:02:05.151 00:02:05.151 crypto: 00:02:05.151 00:02:05.151 compress: 00:02:05.151 00:02:05.151 vdpa: 00:02:05.151 00:02:05.151 00:02:05.151 Message: 00:02:05.151 ================= 00:02:05.151 Content Skipped 00:02:05.151 ================= 00:02:05.151 00:02:05.151 apps: 00:02:05.151 dumpcap: explicitly disabled via build config 00:02:05.151 graph: explicitly disabled via build config 00:02:05.151 pdump: explicitly disabled via build config 00:02:05.151 proc-info: explicitly disabled via build config 00:02:05.151 test-acl: explicitly disabled via build config 00:02:05.151 test-bbdev: explicitly disabled via build config 00:02:05.151 test-cmdline: explicitly disabled via build config 00:02:05.151 test-compress-perf: explicitly disabled via build config 00:02:05.151 test-crypto-perf: explicitly disabled via build config 00:02:05.151 test-dma-perf: explicitly disabled via build config 00:02:05.151 test-eventdev: explicitly disabled via build config 00:02:05.151 test-fib: explicitly disabled via build config 00:02:05.151 test-flow-perf: explicitly disabled via build config 00:02:05.151 test-gpudev: explicitly disabled via build config 00:02:05.151 test-mldev: explicitly disabled via build config 00:02:05.151 test-pipeline: explicitly disabled via build config 00:02:05.151 test-pmd: explicitly disabled via build config 00:02:05.151 test-regex: explicitly disabled via build config 00:02:05.151 test-sad: explicitly disabled via build config 00:02:05.151 test-security-perf: explicitly disabled via build config 00:02:05.151 00:02:05.151 libs: 00:02:05.151 metrics: explicitly 
disabled via build config 00:02:05.151 acl: explicitly disabled via build config 00:02:05.151 bbdev: explicitly disabled via build config 00:02:05.151 bitratestats: explicitly disabled via build config 00:02:05.151 bpf: explicitly disabled via build config 00:02:05.151 cfgfile: explicitly disabled via build config 00:02:05.151 distributor: explicitly disabled via build config 00:02:05.151 efd: explicitly disabled via build config 00:02:05.151 eventdev: explicitly disabled via build config 00:02:05.151 dispatcher: explicitly disabled via build config 00:02:05.151 gpudev: explicitly disabled via build config 00:02:05.151 gro: explicitly disabled via build config 00:02:05.151 gso: explicitly disabled via build config 00:02:05.151 ip_frag: explicitly disabled via build config 00:02:05.151 jobstats: explicitly disabled via build config 00:02:05.151 latencystats: explicitly disabled via build config 00:02:05.151 lpm: explicitly disabled via build config 00:02:05.151 member: explicitly disabled via build config 00:02:05.151 pcapng: explicitly disabled via build config 00:02:05.151 rawdev: explicitly disabled via build config 00:02:05.151 regexdev: explicitly disabled via build config 00:02:05.151 mldev: explicitly disabled via build config 00:02:05.151 rib: explicitly disabled via build config 00:02:05.151 sched: explicitly disabled via build config 00:02:05.151 stack: explicitly disabled via build config 00:02:05.151 ipsec: explicitly disabled via build config 00:02:05.151 pdcp: explicitly disabled via build config 00:02:05.151 fib: explicitly disabled via build config 00:02:05.151 port: explicitly disabled via build config 00:02:05.151 pdump: explicitly disabled via build config 00:02:05.151 table: explicitly disabled via build config 00:02:05.151 pipeline: explicitly disabled via build config 00:02:05.151 graph: explicitly disabled via build config 00:02:05.151 node: explicitly disabled via build config 00:02:05.151 00:02:05.151 drivers: 00:02:05.151 common/cpt: not in enabled drivers build config 00:02:05.151 common/dpaax: not in enabled drivers build config 00:02:05.151 common/iavf: not in enabled drivers build config 00:02:05.151 common/idpf: not in enabled drivers build config 00:02:05.151 common/mvep: not in enabled drivers build config 00:02:05.151 common/octeontx: not in enabled drivers build config 00:02:05.151 bus/auxiliary: not in enabled drivers build config 00:02:05.151 bus/cdx: not in enabled drivers build config 00:02:05.151 bus/dpaa: not in enabled drivers build config 00:02:05.151 bus/fslmc: not in enabled drivers build config 00:02:05.151 bus/ifpga: not in enabled drivers build config 00:02:05.151 bus/platform: not in enabled drivers build config 00:02:05.151 bus/vmbus: not in enabled drivers build config 00:02:05.151 common/cnxk: not in enabled drivers build config 00:02:05.151 common/mlx5: not in enabled drivers build config 00:02:05.151 common/nfp: not in enabled drivers build config 00:02:05.151 common/qat: not in enabled drivers build config 00:02:05.151 common/sfc_efx: not in enabled drivers build config 00:02:05.152 mempool/bucket: not in enabled drivers build config 00:02:05.152 mempool/cnxk: not in enabled drivers build config 00:02:05.152 mempool/dpaa: not in enabled drivers build config 00:02:05.152 mempool/dpaa2: not in enabled drivers build config 00:02:05.152 mempool/octeontx: not in enabled drivers build config 00:02:05.152 mempool/stack: not in enabled drivers build config 00:02:05.152 dma/cnxk: not in enabled drivers build config 00:02:05.152 dma/dpaa: not in 
enabled drivers build config 00:02:05.152 dma/dpaa2: not in enabled drivers build config 00:02:05.152 dma/hisilicon: not in enabled drivers build config 00:02:05.152 dma/idxd: not in enabled drivers build config 00:02:05.152 dma/ioat: not in enabled drivers build config 00:02:05.152 dma/skeleton: not in enabled drivers build config 00:02:05.152 net/af_packet: not in enabled drivers build config 00:02:05.152 net/af_xdp: not in enabled drivers build config 00:02:05.152 net/ark: not in enabled drivers build config 00:02:05.152 net/atlantic: not in enabled drivers build config 00:02:05.152 net/avp: not in enabled drivers build config 00:02:05.152 net/axgbe: not in enabled drivers build config 00:02:05.152 net/bnx2x: not in enabled drivers build config 00:02:05.152 net/bnxt: not in enabled drivers build config 00:02:05.152 net/bonding: not in enabled drivers build config 00:02:05.152 net/cnxk: not in enabled drivers build config 00:02:05.152 net/cpfl: not in enabled drivers build config 00:02:05.152 net/cxgbe: not in enabled drivers build config 00:02:05.152 net/dpaa: not in enabled drivers build config 00:02:05.152 net/dpaa2: not in enabled drivers build config 00:02:05.152 net/e1000: not in enabled drivers build config 00:02:05.152 net/ena: not in enabled drivers build config 00:02:05.152 net/enetc: not in enabled drivers build config 00:02:05.152 net/enetfec: not in enabled drivers build config 00:02:05.152 net/enic: not in enabled drivers build config 00:02:05.152 net/failsafe: not in enabled drivers build config 00:02:05.152 net/fm10k: not in enabled drivers build config 00:02:05.152 net/gve: not in enabled drivers build config 00:02:05.152 net/hinic: not in enabled drivers build config 00:02:05.152 net/hns3: not in enabled drivers build config 00:02:05.152 net/i40e: not in enabled drivers build config 00:02:05.152 net/iavf: not in enabled drivers build config 00:02:05.152 net/ice: not in enabled drivers build config 00:02:05.152 net/idpf: not in enabled drivers build config 00:02:05.152 net/igc: not in enabled drivers build config 00:02:05.152 net/ionic: not in enabled drivers build config 00:02:05.152 net/ipn3ke: not in enabled drivers build config 00:02:05.152 net/ixgbe: not in enabled drivers build config 00:02:05.152 net/mana: not in enabled drivers build config 00:02:05.152 net/memif: not in enabled drivers build config 00:02:05.152 net/mlx4: not in enabled drivers build config 00:02:05.152 net/mlx5: not in enabled drivers build config 00:02:05.152 net/mvneta: not in enabled drivers build config 00:02:05.152 net/mvpp2: not in enabled drivers build config 00:02:05.152 net/netvsc: not in enabled drivers build config 00:02:05.152 net/nfb: not in enabled drivers build config 00:02:05.152 net/nfp: not in enabled drivers build config 00:02:05.152 net/ngbe: not in enabled drivers build config 00:02:05.152 net/null: not in enabled drivers build config 00:02:05.152 net/octeontx: not in enabled drivers build config 00:02:05.152 net/octeon_ep: not in enabled drivers build config 00:02:05.152 net/pcap: not in enabled drivers build config 00:02:05.152 net/pfe: not in enabled drivers build config 00:02:05.152 net/qede: not in enabled drivers build config 00:02:05.152 net/ring: not in enabled drivers build config 00:02:05.152 net/sfc: not in enabled drivers build config 00:02:05.152 net/softnic: not in enabled drivers build config 00:02:05.152 net/tap: not in enabled drivers build config 00:02:05.152 net/thunderx: not in enabled drivers build config 00:02:05.152 net/txgbe: not in enabled drivers 
build config 00:02:05.152 net/vdev_netvsc: not in enabled drivers build config 00:02:05.152 net/vhost: not in enabled drivers build config 00:02:05.152 net/virtio: not in enabled drivers build config 00:02:05.152 net/vmxnet3: not in enabled drivers build config 00:02:05.152 raw/*: missing internal dependency, "rawdev" 00:02:05.152 crypto/armv8: not in enabled drivers build config 00:02:05.152 crypto/bcmfs: not in enabled drivers build config 00:02:05.152 crypto/caam_jr: not in enabled drivers build config 00:02:05.152 crypto/ccp: not in enabled drivers build config 00:02:05.152 crypto/cnxk: not in enabled drivers build config 00:02:05.152 crypto/dpaa_sec: not in enabled drivers build config 00:02:05.152 crypto/dpaa2_sec: not in enabled drivers build config 00:02:05.152 crypto/ipsec_mb: not in enabled drivers build config 00:02:05.152 crypto/mlx5: not in enabled drivers build config 00:02:05.152 crypto/mvsam: not in enabled drivers build config 00:02:05.152 crypto/nitrox: not in enabled drivers build config 00:02:05.152 crypto/null: not in enabled drivers build config 00:02:05.152 crypto/octeontx: not in enabled drivers build config 00:02:05.152 crypto/openssl: not in enabled drivers build config 00:02:05.152 crypto/scheduler: not in enabled drivers build config 00:02:05.152 crypto/uadk: not in enabled drivers build config 00:02:05.152 crypto/virtio: not in enabled drivers build config 00:02:05.152 compress/isal: not in enabled drivers build config 00:02:05.152 compress/mlx5: not in enabled drivers build config 00:02:05.152 compress/octeontx: not in enabled drivers build config 00:02:05.152 compress/zlib: not in enabled drivers build config 00:02:05.152 regex/*: missing internal dependency, "regexdev" 00:02:05.152 ml/*: missing internal dependency, "mldev" 00:02:05.152 vdpa/ifc: not in enabled drivers build config 00:02:05.152 vdpa/mlx5: not in enabled drivers build config 00:02:05.152 vdpa/nfp: not in enabled drivers build config 00:02:05.152 vdpa/sfc: not in enabled drivers build config 00:02:05.152 event/*: missing internal dependency, "eventdev" 00:02:05.152 baseband/*: missing internal dependency, "bbdev" 00:02:05.152 gpu/*: missing internal dependency, "gpudev" 00:02:05.152 00:02:05.152 00:02:05.152 Build targets in project: 85 00:02:05.152 00:02:05.152 DPDK 23.11.0 00:02:05.152 00:02:05.152 User defined options 00:02:05.152 buildtype : debug 00:02:05.152 default_library : shared 00:02:05.152 libdir : lib 00:02:05.152 prefix : /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build 00:02:05.152 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:05.152 c_link_args : 00:02:05.152 cpu_instruction_set: native 00:02:05.152 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:05.152 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:05.152 enable_docs : false 00:02:05.152 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:05.152 enable_kmods : false 00:02:05.152 tests : false 00:02:05.152 00:02:05.152 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:05.152 ninja: Entering directory 
`/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp' 00:02:05.416 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:05.416 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:05.416 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:05.416 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:05.416 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:05.416 [6/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:05.416 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:05.416 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:05.416 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:05.416 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:05.416 [11/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:05.416 [12/265] Linking static target lib/librte_kvargs.a 00:02:05.416 [13/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:05.416 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:05.416 [15/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:05.416 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:05.416 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:05.416 [18/265] Linking static target lib/librte_log.a 00:02:05.416 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:05.416 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:05.416 [21/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:05.416 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:05.416 [23/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:05.416 [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:05.674 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:05.938 [26/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:05.938 [27/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:05.938 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:05.938 [29/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:05.938 [30/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:05.938 [31/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:05.938 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:05.938 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:05.938 [34/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:05.938 [35/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:05.938 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:05.938 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:05.938 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:05.938 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:05.938 [40/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:05.938 
[41/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:05.938 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:05.938 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:05.938 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:05.938 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:05.938 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:05.938 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:05.938 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:05.938 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:05.938 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:05.938 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:05.938 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:05.938 [53/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:05.938 [54/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:05.938 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:05.938 [56/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:05.938 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:05.938 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:05.938 [59/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:05.938 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:05.938 [61/265] Linking static target lib/librte_meter.a 00:02:05.938 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:05.938 [63/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:05.938 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:05.938 [65/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.938 [66/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:05.938 [67/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:06.201 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:06.201 [69/265] Linking static target lib/librte_telemetry.a 00:02:06.201 [70/265] Linking static target lib/librte_ring.a 00:02:06.201 [71/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:06.201 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:06.201 [73/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:06.201 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:06.201 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:06.201 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.201 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.201 [78/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:06.201 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.201 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.201 
[81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:06.201 [82/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:06.201 [83/265] Linking static target lib/librte_pci.a 00:02:06.201 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.201 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.201 [86/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:06.201 [87/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:06.201 [88/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.201 [89/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:06.201 [90/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:06.201 [91/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.201 [92/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:06.201 [93/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:06.201 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:06.201 [95/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:06.201 [96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:06.201 [97/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:06.201 [98/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:06.201 [99/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:06.201 [100/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.201 [101/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:06.201 [102/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:06.201 [103/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:06.201 [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:06.201 [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:06.201 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:06.201 [107/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:06.201 [108/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:06.201 [109/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:06.201 [110/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:06.201 [111/265] Linking static target lib/librte_mempool.a 00:02:06.201 [112/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:06.201 [113/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:06.201 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:06.201 [115/265] Linking static target lib/librte_rcu.a 00:02:06.201 [116/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:06.201 [117/265] Linking static target lib/librte_net.a 00:02:06.201 [118/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:06.460 [119/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:06.460 [120/265] Linking static target lib/librte_eal.a 00:02:06.460 [121/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.460 [122/265] Generating lib/meter.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:06.460 [123/265] Linking target lib/librte_log.so.24.0 00:02:06.460 [124/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.460 [125/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.720 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.720 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:06.720 [128/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:06.720 [129/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:06.720 [130/265] Linking static target lib/librte_mbuf.a 00:02:06.720 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:06.720 [132/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:06.720 [133/265] Linking static target lib/librte_cmdline.a 00:02:06.720 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:06.720 [135/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:06.720 [136/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.720 [137/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:06.720 [138/265] Linking static target lib/librte_timer.a 00:02:06.720 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:06.720 [140/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.720 [141/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:06.720 [142/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:06.720 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.720 [144/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.720 [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:06.720 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:06.720 [147/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:06.720 [148/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:06.720 [149/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:06.720 [150/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:06.720 [151/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:06.720 [152/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:06.720 [153/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:06.720 [154/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:06.720 [155/265] Linking target lib/librte_telemetry.so.24.0 00:02:06.720 [156/265] Linking target lib/librte_kvargs.so.24.0 00:02:06.720 [157/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:06.720 [158/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:06.720 [159/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:06.720 [160/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:06.720 [161/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:06.720 [162/265] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:06.720 [163/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:06.720 [164/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:06.720 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:06.720 [166/265] Linking static target lib/librte_compressdev.a 00:02:06.720 [167/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:06.720 [168/265] Linking static target lib/librte_dmadev.a 00:02:06.720 [169/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:06.978 [170/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:06.978 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:06.978 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:06.978 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:06.978 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:06.978 [175/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:06.978 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:06.978 [177/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:06.978 [178/265] Linking static target lib/librte_power.a 00:02:06.978 [179/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:06.978 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:06.978 [181/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:06.978 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:06.978 [183/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:06.978 [184/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:06.978 [185/265] Linking static target lib/librte_security.a 00:02:06.978 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:06.978 [187/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:06.978 [188/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:06.978 [189/265] Linking static target lib/librte_reorder.a 00:02:06.978 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:06.978 [191/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:06.978 [192/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:06.978 [193/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.978 [194/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:06.978 [195/265] Linking static target drivers/librte_bus_vdev.a 00:02:06.978 [196/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:06.978 [197/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.978 [198/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.236 [199/265] Linking static target lib/librte_hash.a 00:02:07.237 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:07.237 [201/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:07.237 [202/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 
00:02:07.237 [203/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:07.237 [204/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.237 [205/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:07.237 [206/265] Linking static target drivers/librte_mempool_ring.a 00:02:07.237 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.237 [208/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:07.237 [209/265] Linking static target drivers/librte_bus_pci.a 00:02:07.237 [210/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.237 [211/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.237 [212/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:07.495 [213/265] Linking static target lib/librte_cryptodev.a 00:02:07.495 [214/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.495 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.495 [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.495 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.754 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.754 [219/265] Linking static target lib/librte_ethdev.a 00:02:07.754 [220/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.754 [221/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.013 [222/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.014 [223/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.014 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.390 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:09.390 [226/265] Linking static target lib/librte_vhost.a 00:02:09.390 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.299 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.867 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.436 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.695 [231/265] Linking target lib/librte_eal.so.24.0 00:02:18.695 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:18.955 [233/265] Linking target lib/librte_meter.so.24.0 00:02:18.955 [234/265] Linking target lib/librte_timer.so.24.0 00:02:18.955 [235/265] Linking target lib/librte_ring.so.24.0 00:02:18.955 [236/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:18.955 [237/265] Linking target lib/librte_pci.so.24.0 00:02:18.955 [238/265] Linking target lib/librte_dmadev.so.24.0 00:02:18.955 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:18.955 [240/265] Generating symbol file 
lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:18.955 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:18.955 [242/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:18.955 [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:18.955 [244/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:18.955 [245/265] Linking target lib/librte_rcu.so.24.0 00:02:18.955 [246/265] Linking target lib/librte_mempool.so.24.0 00:02:19.214 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:19.214 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:19.214 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:19.214 [250/265] Linking target lib/librte_mbuf.so.24.0 00:02:19.472 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:19.472 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:19.472 [253/265] Linking target lib/librte_net.so.24.0 00:02:19.472 [254/265] Linking target lib/librte_reorder.so.24.0 00:02:19.472 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:19.743 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:19.743 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:19.743 [258/265] Linking target lib/librte_hash.so.24.0 00:02:19.743 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:19.743 [260/265] Linking target lib/librte_security.so.24.0 00:02:19.743 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:19.743 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:20.002 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:20.002 [264/265] Linking target lib/librte_power.so.24.0 00:02:20.002 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:20.002 INFO: autodetecting backend as ninja 00:02:20.002 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:20.936 make[3]: '/var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a' is up to date. 
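At this point the bundled DPDK (23.11.0, 265 Meson targets) has been configured and built: the options summary above records buildtype debug, a shared default_library, the pruned disable_apps/disable_libs sets with only the bus_pci, bus_vdev and mempool_ring drivers enabled, and the backend command shows ninja running with -j 72. In this job the SPDK build scripts drive that step; reproduced by hand against the same checkout it would look roughly like:

    cd /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk
    # configure DPDK with the same user-defined options the summary above reports
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build \
        -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    # build with ninja, mirroring the backend command the configure step printed
    ninja -C build-tmp -j 72

The make[3] line that follows is SPDK's own build resuming: the OCF environment library is already up to date, so the log moves on to compiling SPDK's libraries.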
00:02:20.936 CC lib/log/log.o 00:02:20.936 CC lib/ut/ut.o 00:02:20.936 CC lib/log/log_deprecated.o 00:02:20.936 CC lib/log/log_flags.o 00:02:20.936 CC lib/ut_mock/mock.o 00:02:21.195 LIB libspdk_ut.a 00:02:21.195 LIB libspdk_ut_mock.a 00:02:21.195 LIB libspdk_log.a 00:02:21.195 SO libspdk_ut.so.1.0 00:02:21.195 SO libspdk_ut_mock.so.5.0 00:02:21.195 SO libspdk_log.so.6.1 00:02:21.195 SYMLINK libspdk_ut.so 00:02:21.453 SYMLINK libspdk_ut_mock.so 00:02:21.453 SYMLINK libspdk_log.so 00:02:21.453 CC lib/util/base64.o 00:02:21.453 CC lib/util/bit_array.o 00:02:21.453 CC lib/util/crc16.o 00:02:21.453 CC lib/util/cpuset.o 00:02:21.453 CC lib/util/crc32_ieee.o 00:02:21.453 CC lib/util/crc32.o 00:02:21.453 CC lib/util/crc32c.o 00:02:21.453 CC lib/util/crc64.o 00:02:21.453 CC lib/util/fd.o 00:02:21.453 CC lib/dma/dma.o 00:02:21.453 CC lib/util/dif.o 00:02:21.453 CC lib/util/file.o 00:02:21.453 CC lib/util/hexlify.o 00:02:21.453 CC lib/util/iov.o 00:02:21.453 CC lib/util/strerror_tls.o 00:02:21.453 CC lib/util/math.o 00:02:21.453 CC lib/util/string.o 00:02:21.453 CC lib/util/pipe.o 00:02:21.453 CC lib/util/uuid.o 00:02:21.453 CC lib/util/fd_group.o 00:02:21.453 CC lib/util/xor.o 00:02:21.453 CC lib/util/zipf.o 00:02:21.712 CXX lib/trace_parser/trace.o 00:02:21.712 CC lib/ioat/ioat.o 00:02:21.712 CC lib/vfio_user/host/vfio_user_pci.o 00:02:21.712 CC lib/vfio_user/host/vfio_user.o 00:02:21.712 LIB libspdk_dma.a 00:02:21.712 SO libspdk_dma.so.3.0 00:02:21.970 LIB libspdk_ioat.a 00:02:21.970 SYMLINK libspdk_dma.so 00:02:21.970 SO libspdk_ioat.so.6.0 00:02:21.970 SYMLINK libspdk_ioat.so 00:02:21.970 LIB libspdk_vfio_user.a 00:02:21.970 SO libspdk_vfio_user.so.4.0 00:02:22.228 LIB libspdk_util.a 00:02:22.228 SYMLINK libspdk_vfio_user.so 00:02:22.228 SO libspdk_util.so.8.0 00:02:22.488 SYMLINK libspdk_util.so 00:02:22.488 LIB libspdk_trace_parser.a 00:02:22.488 SO libspdk_trace_parser.so.4.0 00:02:22.746 CC lib/json/json_parse.o 00:02:22.746 CC lib/json/json_util.o 00:02:22.746 CC lib/env_dpdk/env.o 00:02:22.746 CC lib/json/json_write.o 00:02:22.746 CC lib/env_dpdk/memory.o 00:02:22.746 CC lib/rdma/common.o 00:02:22.746 CC lib/env_dpdk/pci.o 00:02:22.746 CC lib/rdma/rdma_verbs.o 00:02:22.746 CC lib/env_dpdk/init.o 00:02:22.746 CC lib/env_dpdk/threads.o 00:02:22.746 CC lib/idxd/idxd.o 00:02:22.746 CC lib/idxd/idxd_user.o 00:02:22.746 CC lib/env_dpdk/pci_ioat.o 00:02:22.746 CC lib/env_dpdk/pci_virtio.o 00:02:22.746 CC lib/env_dpdk/pci_vmd.o 00:02:22.746 CC lib/env_dpdk/pci_idxd.o 00:02:22.746 CC lib/conf/conf.o 00:02:22.746 CC lib/env_dpdk/pci_event.o 00:02:22.746 CC lib/env_dpdk/sigbus_handler.o 00:02:22.746 CC lib/env_dpdk/pci_dpdk.o 00:02:22.746 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:22.746 SYMLINK libspdk_trace_parser.so 00:02:22.746 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:22.746 CC lib/vmd/vmd.o 00:02:22.746 CC lib/vmd/led.o 00:02:23.006 LIB libspdk_conf.a 00:02:23.006 LIB libspdk_json.a 00:02:23.006 SO libspdk_conf.so.5.0 00:02:23.006 LIB libspdk_rdma.a 00:02:23.006 SO libspdk_json.so.5.1 00:02:23.006 SO libspdk_rdma.so.5.0 00:02:23.006 SYMLINK libspdk_conf.so 00:02:23.006 SYMLINK libspdk_json.so 00:02:23.006 SYMLINK libspdk_rdma.so 00:02:23.006 LIB libspdk_idxd.a 00:02:23.265 SO libspdk_idxd.so.11.0 00:02:23.265 SYMLINK libspdk_idxd.so 00:02:23.265 CC lib/jsonrpc/jsonrpc_server.o 00:02:23.265 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.265 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.265 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.265 LIB libspdk_vmd.a 00:02:23.265 SO libspdk_vmd.so.5.0 00:02:23.524 SYMLINK 
libspdk_vmd.so 00:02:23.524 LIB libspdk_jsonrpc.a 00:02:23.524 SO libspdk_jsonrpc.so.5.1 00:02:23.783 SYMLINK libspdk_jsonrpc.so 00:02:23.783 CC lib/rpc/rpc.o 00:02:24.043 LIB libspdk_env_dpdk.a 00:02:24.043 LIB libspdk_rpc.a 00:02:24.043 SO libspdk_rpc.so.5.0 00:02:24.043 SO libspdk_env_dpdk.so.13.0 00:02:24.043 SYMLINK libspdk_rpc.so 00:02:24.302 SYMLINK libspdk_env_dpdk.so 00:02:24.302 CC lib/trace/trace.o 00:02:24.302 CC lib/trace/trace_flags.o 00:02:24.302 CC lib/trace/trace_rpc.o 00:02:24.302 CC lib/sock/sock.o 00:02:24.302 CC lib/sock/sock_rpc.o 00:02:24.302 CC lib/notify/notify_rpc.o 00:02:24.302 CC lib/notify/notify.o 00:02:24.565 LIB libspdk_notify.a 00:02:24.565 LIB libspdk_trace.a 00:02:24.565 SO libspdk_notify.so.5.0 00:02:24.565 SO libspdk_trace.so.9.0 00:02:24.875 SYMLINK libspdk_notify.so 00:02:24.875 SYMLINK libspdk_trace.so 00:02:24.875 LIB libspdk_sock.a 00:02:24.875 SO libspdk_sock.so.8.0 00:02:24.875 SYMLINK libspdk_sock.so 00:02:24.875 CC lib/thread/thread.o 00:02:24.875 CC lib/thread/iobuf.o 00:02:25.134 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:25.134 CC lib/nvme/nvme_ctrlr.o 00:02:25.134 CC lib/nvme/nvme_fabric.o 00:02:25.134 CC lib/nvme/nvme_ns_cmd.o 00:02:25.134 CC lib/nvme/nvme_ns.o 00:02:25.134 CC lib/nvme/nvme_pcie.o 00:02:25.134 CC lib/nvme/nvme_pcie_common.o 00:02:25.134 CC lib/nvme/nvme_qpair.o 00:02:25.134 CC lib/nvme/nvme.o 00:02:25.134 CC lib/nvme/nvme_quirks.o 00:02:25.134 CC lib/nvme/nvme_transport.o 00:02:25.134 CC lib/nvme/nvme_discovery.o 00:02:25.134 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:25.134 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:25.134 CC lib/nvme/nvme_tcp.o 00:02:25.134 CC lib/nvme/nvme_opal.o 00:02:25.134 CC lib/nvme/nvme_io_msg.o 00:02:25.134 CC lib/nvme/nvme_poll_group.o 00:02:25.134 CC lib/nvme/nvme_zns.o 00:02:25.134 CC lib/nvme/nvme_cuse.o 00:02:25.134 CC lib/nvme/nvme_vfio_user.o 00:02:25.134 CC lib/nvme/nvme_rdma.o 00:02:26.514 LIB libspdk_thread.a 00:02:26.514 SO libspdk_thread.so.9.0 00:02:26.514 SYMLINK libspdk_thread.so 00:02:26.773 CC lib/init/subsystem.o 00:02:26.773 CC lib/init/json_config.o 00:02:26.773 CC lib/virtio/virtio.o 00:02:26.773 CC lib/accel/accel.o 00:02:26.773 CC lib/virtio/virtio_vhost_user.o 00:02:26.773 CC lib/init/subsystem_rpc.o 00:02:26.773 CC lib/virtio/virtio_vfio_user.o 00:02:26.773 CC lib/accel/accel_rpc.o 00:02:26.773 CC lib/init/rpc.o 00:02:26.773 CC lib/virtio/virtio_pci.o 00:02:26.773 CC lib/accel/accel_sw.o 00:02:26.773 CC lib/blob/blobstore.o 00:02:26.773 CC lib/blob/request.o 00:02:26.773 CC lib/blob/zeroes.o 00:02:26.773 CC lib/blob/blob_bs_dev.o 00:02:27.032 LIB libspdk_init.a 00:02:27.032 SO libspdk_init.so.4.0 00:02:27.291 LIB libspdk_virtio.a 00:02:27.291 SYMLINK libspdk_init.so 00:02:27.291 SO libspdk_virtio.so.6.0 00:02:27.291 LIB libspdk_nvme.a 00:02:27.291 SYMLINK libspdk_virtio.so 00:02:27.551 SO libspdk_nvme.so.12.0 00:02:27.551 CC lib/event/app.o 00:02:27.551 CC lib/event/reactor.o 00:02:27.551 CC lib/event/log_rpc.o 00:02:27.551 CC lib/event/app_rpc.o 00:02:27.551 CC lib/event/scheduler_static.o 00:02:27.810 SYMLINK libspdk_nvme.so 00:02:27.810 LIB libspdk_accel.a 00:02:27.810 LIB libspdk_event.a 00:02:27.810 SO libspdk_accel.so.14.0 00:02:27.810 SO libspdk_event.so.12.0 00:02:28.069 SYMLINK libspdk_accel.so 00:02:28.069 SYMLINK libspdk_event.so 00:02:28.328 CC lib/bdev/bdev.o 00:02:28.328 CC lib/bdev/bdev_rpc.o 00:02:28.329 CC lib/bdev/part.o 00:02:28.329 CC lib/bdev/bdev_zone.o 00:02:28.329 CC lib/bdev/scsi_nvme.o 00:02:28.898 LIB libspdk_blob.a 00:02:28.898 SO libspdk_blob.so.10.1 
00:02:29.157 SYMLINK libspdk_blob.so 00:02:29.157 CC lib/lvol/lvol.o 00:02:29.157 CC lib/blobfs/blobfs.o 00:02:29.157 CC lib/blobfs/tree.o 00:02:30.094 LIB libspdk_lvol.a 00:02:30.094 LIB libspdk_blobfs.a 00:02:30.094 SO libspdk_lvol.so.9.1 00:02:30.094 SO libspdk_blobfs.so.9.0 00:02:30.094 SYMLINK libspdk_lvol.so 00:02:30.353 SYMLINK libspdk_blobfs.so 00:02:30.921 LIB libspdk_bdev.a 00:02:30.921 SO libspdk_bdev.so.14.0 00:02:30.921 SYMLINK libspdk_bdev.so 00:02:31.181 CC lib/nvmf/ctrlr.o 00:02:31.181 CC lib/nvmf/ctrlr_discovery.o 00:02:31.181 CC lib/scsi/dev.o 00:02:31.181 CC lib/scsi/port.o 00:02:31.181 CC lib/scsi/lun.o 00:02:31.181 CC lib/nvmf/ctrlr_bdev.o 00:02:31.181 CC lib/nvmf/subsystem.o 00:02:31.181 CC lib/scsi/scsi.o 00:02:31.181 CC lib/nvmf/nvmf.o 00:02:31.181 CC lib/scsi/scsi_bdev.o 00:02:31.181 CC lib/scsi/scsi_pr.o 00:02:31.181 CC lib/nvmf/nvmf_rpc.o 00:02:31.181 CC lib/nvmf/transport.o 00:02:31.181 CC lib/nvmf/tcp.o 00:02:31.181 CC lib/scsi/scsi_rpc.o 00:02:31.181 CC lib/nbd/nbd.o 00:02:31.181 CC lib/scsi/task.o 00:02:31.181 CC lib/nvmf/rdma.o 00:02:31.181 CC lib/nbd/nbd_rpc.o 00:02:31.181 CC lib/ftl/ftl_core.o 00:02:31.181 CC lib/ftl/ftl_layout.o 00:02:31.181 CC lib/ftl/ftl_init.o 00:02:31.181 CC lib/ftl/ftl_debug.o 00:02:31.181 CC lib/ublk/ublk.o 00:02:31.181 CC lib/ftl/ftl_io.o 00:02:31.181 CC lib/ftl/ftl_sb.o 00:02:31.181 CC lib/ftl/ftl_l2p.o 00:02:31.181 CC lib/ftl/ftl_nv_cache.o 00:02:31.181 CC lib/ftl/ftl_l2p_flat.o 00:02:31.181 CC lib/ublk/ublk_rpc.o 00:02:31.181 CC lib/ftl/ftl_band.o 00:02:31.181 CC lib/ftl/ftl_writer.o 00:02:31.181 CC lib/ftl/ftl_band_ops.o 00:02:31.181 CC lib/ftl/ftl_rq.o 00:02:31.181 CC lib/ftl/ftl_reloc.o 00:02:31.181 CC lib/ftl/ftl_l2p_cache.o 00:02:31.181 CC lib/ftl/ftl_p2l.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:31.181 CC lib/ftl/utils/ftl_conf.o 00:02:31.181 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:31.181 CC lib/ftl/utils/ftl_md.o 00:02:31.181 CC lib/ftl/utils/ftl_mempool.o 00:02:31.181 CC lib/ftl/utils/ftl_property.o 00:02:31.181 CC lib/ftl/utils/ftl_bitmap.o 00:02:31.181 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:31.181 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:31.181 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:31.181 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:31.181 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:31.181 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:31.181 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:31.181 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:31.181 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:31.181 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:31.181 CC lib/ftl/base/ftl_base_dev.o 00:02:31.181 CC lib/ftl/ftl_trace.o 00:02:31.181 CC lib/ftl/base/ftl_base_bdev.o 00:02:31.748 LIB libspdk_nbd.a 00:02:31.748 SO libspdk_nbd.so.6.0 00:02:32.008 SYMLINK libspdk_nbd.so 00:02:32.008 LIB libspdk_scsi.a 00:02:32.008 SO libspdk_scsi.so.8.0 00:02:32.008 LIB libspdk_ublk.a 00:02:32.267 SO libspdk_ublk.so.2.0 00:02:32.267 SYMLINK libspdk_scsi.so 00:02:32.267 SYMLINK libspdk_ublk.so 00:02:32.267 LIB libspdk_ftl.a 00:02:32.267 CC lib/iscsi/conn.o 
00:02:32.267 CC lib/vhost/vhost.o 00:02:32.267 CC lib/iscsi/init_grp.o 00:02:32.267 CC lib/iscsi/iscsi.o 00:02:32.267 CC lib/iscsi/md5.o 00:02:32.267 CC lib/iscsi/param.o 00:02:32.267 CC lib/vhost/vhost_rpc.o 00:02:32.267 CC lib/vhost/vhost_scsi.o 00:02:32.267 CC lib/vhost/vhost_blk.o 00:02:32.267 CC lib/iscsi/portal_grp.o 00:02:32.267 CC lib/iscsi/tgt_node.o 00:02:32.267 CC lib/vhost/rte_vhost_user.o 00:02:32.267 CC lib/iscsi/iscsi_subsystem.o 00:02:32.267 CC lib/iscsi/iscsi_rpc.o 00:02:32.267 CC lib/iscsi/task.o 00:02:32.526 SO libspdk_ftl.so.8.0 00:02:33.093 SYMLINK libspdk_ftl.so 00:02:33.351 LIB libspdk_iscsi.a 00:02:33.351 LIB libspdk_nvmf.a 00:02:33.351 LIB libspdk_vhost.a 00:02:33.611 SO libspdk_iscsi.so.7.0 00:02:33.611 SO libspdk_vhost.so.7.1 00:02:33.611 SO libspdk_nvmf.so.17.0 00:02:33.611 SYMLINK libspdk_vhost.so 00:02:33.611 SYMLINK libspdk_iscsi.so 00:02:33.870 SYMLINK libspdk_nvmf.so 00:02:34.129 CC module/env_dpdk/env_dpdk_rpc.o 00:02:34.129 CC module/blob/bdev/blob_bdev.o 00:02:34.129 CC module/scheduler/gscheduler/gscheduler.o 00:02:34.129 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:34.129 CC module/accel/iaa/accel_iaa.o 00:02:34.129 CC module/accel/iaa/accel_iaa_rpc.o 00:02:34.129 CC module/accel/error/accel_error_rpc.o 00:02:34.129 CC module/accel/error/accel_error.o 00:02:34.129 CC module/accel/dsa/accel_dsa.o 00:02:34.129 CC module/accel/dsa/accel_dsa_rpc.o 00:02:34.129 CC module/accel/ioat/accel_ioat.o 00:02:34.129 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:34.129 CC module/accel/ioat/accel_ioat_rpc.o 00:02:34.129 CC module/sock/posix/posix.o 00:02:34.129 LIB libspdk_env_dpdk_rpc.a 00:02:34.388 SO libspdk_env_dpdk_rpc.so.5.0 00:02:34.388 SYMLINK libspdk_env_dpdk_rpc.so 00:02:34.388 LIB libspdk_scheduler_gscheduler.a 00:02:34.388 LIB libspdk_accel_error.a 00:02:34.388 LIB libspdk_scheduler_dpdk_governor.a 00:02:34.388 SO libspdk_scheduler_gscheduler.so.3.0 00:02:34.388 LIB libspdk_scheduler_dynamic.a 00:02:34.388 SO libspdk_accel_error.so.1.0 00:02:34.388 LIB libspdk_accel_iaa.a 00:02:34.388 LIB libspdk_accel_ioat.a 00:02:34.388 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:34.388 SO libspdk_scheduler_dynamic.so.3.0 00:02:34.388 LIB libspdk_blob_bdev.a 00:02:34.388 SO libspdk_accel_ioat.so.5.0 00:02:34.388 SYMLINK libspdk_scheduler_gscheduler.so 00:02:34.388 SO libspdk_accel_iaa.so.2.0 00:02:34.388 LIB libspdk_accel_dsa.a 00:02:34.388 SYMLINK libspdk_accel_error.so 00:02:34.388 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:34.388 SO libspdk_blob_bdev.so.10.1 00:02:34.647 SO libspdk_accel_dsa.so.4.0 00:02:34.647 SYMLINK libspdk_scheduler_dynamic.so 00:02:34.647 SYMLINK libspdk_accel_ioat.so 00:02:34.647 SYMLINK libspdk_accel_iaa.so 00:02:34.647 SYMLINK libspdk_blob_bdev.so 00:02:34.647 SYMLINK libspdk_accel_dsa.so 00:02:34.906 CC module/bdev/delay/vbdev_delay.o 00:02:34.906 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:34.906 CC module/blobfs/bdev/blobfs_bdev.o 00:02:34.906 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:34.906 CC module/bdev/malloc/bdev_malloc.o 00:02:34.906 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:34.906 CC module/bdev/error/vbdev_error.o 00:02:34.906 CC module/bdev/error/vbdev_error_rpc.o 00:02:34.906 CC module/bdev/iscsi/bdev_iscsi.o 00:02:34.906 CC module/bdev/null/bdev_null.o 00:02:34.906 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:34.906 CC module/bdev/null/bdev_null_rpc.o 00:02:34.906 CC module/bdev/lvol/vbdev_lvol.o 00:02:34.906 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:34.906 CC module/bdev/nvme/bdev_nvme_rpc.o 
00:02:34.906 CC module/bdev/nvme/bdev_nvme.o 00:02:34.906 CC module/bdev/nvme/nvme_rpc.o 00:02:34.906 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:34.906 CC module/bdev/nvme/bdev_mdns_client.o 00:02:34.906 CC module/bdev/ftl/bdev_ftl.o 00:02:34.906 CC module/bdev/nvme/vbdev_opal.o 00:02:34.906 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:34.906 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:34.906 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:34.906 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:34.906 CC module/bdev/split/vbdev_split.o 00:02:34.906 CC module/bdev/raid/bdev_raid.o 00:02:34.906 CC module/bdev/split/vbdev_split_rpc.o 00:02:34.906 CC module/bdev/gpt/gpt.o 00:02:34.906 CC module/bdev/raid/bdev_raid_sb.o 00:02:34.906 CC module/bdev/raid/raid1.o 00:02:34.906 CC module/bdev/raid/raid0.o 00:02:34.906 CC module/bdev/raid/bdev_raid_rpc.o 00:02:34.906 CC module/bdev/raid/concat.o 00:02:34.906 CC module/bdev/gpt/vbdev_gpt.o 00:02:34.906 CC module/bdev/aio/bdev_aio.o 00:02:34.906 CC module/bdev/aio/bdev_aio_rpc.o 00:02:34.906 CC module/bdev/passthru/vbdev_passthru.o 00:02:34.906 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:34.906 CC module/bdev/ocf/data.o 00:02:34.906 CC module/bdev/ocf/stats.o 00:02:34.906 CC module/bdev/ocf/ctx.o 00:02:34.906 CC module/bdev/ocf/utils.o 00:02:34.906 CC module/bdev/ocf/vbdev_ocf.o 00:02:34.906 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:34.906 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:34.906 CC module/bdev/ocf/volume.o 00:02:34.906 CC module/bdev/ocf/vbdev_ocf_rpc.o 00:02:34.906 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:34.906 LIB libspdk_sock_posix.a 00:02:34.906 SO libspdk_sock_posix.so.5.0 00:02:35.164 SYMLINK libspdk_sock_posix.so 00:02:35.164 LIB libspdk_bdev_split.a 00:02:35.164 SO libspdk_bdev_split.so.5.0 00:02:35.164 LIB libspdk_bdev_error.a 00:02:35.164 LIB libspdk_blobfs_bdev.a 00:02:35.164 LIB libspdk_bdev_gpt.a 00:02:35.164 SO libspdk_blobfs_bdev.so.5.0 00:02:35.164 SO libspdk_bdev_error.so.5.0 00:02:35.164 LIB libspdk_bdev_ftl.a 00:02:35.164 LIB libspdk_bdev_aio.a 00:02:35.164 SYMLINK libspdk_bdev_split.so 00:02:35.164 SO libspdk_bdev_gpt.so.5.0 00:02:35.424 LIB libspdk_bdev_null.a 00:02:35.424 SO libspdk_bdev_ftl.so.5.0 00:02:35.424 SO libspdk_bdev_aio.so.5.0 00:02:35.424 SYMLINK libspdk_bdev_error.so 00:02:35.424 SO libspdk_bdev_null.so.5.0 00:02:35.424 SYMLINK libspdk_blobfs_bdev.so 00:02:35.424 SYMLINK libspdk_bdev_gpt.so 00:02:35.424 LIB libspdk_bdev_iscsi.a 00:02:35.424 SYMLINK libspdk_bdev_ftl.so 00:02:35.424 LIB libspdk_bdev_malloc.a 00:02:35.424 SYMLINK libspdk_bdev_null.so 00:02:35.424 SYMLINK libspdk_bdev_aio.so 00:02:35.424 LIB libspdk_bdev_zone_block.a 00:02:35.424 LIB libspdk_bdev_passthru.a 00:02:35.424 LIB libspdk_bdev_delay.a 00:02:35.424 SO libspdk_bdev_iscsi.so.5.0 00:02:35.424 SO libspdk_bdev_malloc.so.5.0 00:02:35.424 LIB libspdk_bdev_lvol.a 00:02:35.424 SO libspdk_bdev_zone_block.so.5.0 00:02:35.424 SO libspdk_bdev_delay.so.5.0 00:02:35.424 LIB libspdk_bdev_ocf.a 00:02:35.424 SO libspdk_bdev_passthru.so.5.0 00:02:35.424 SO libspdk_bdev_lvol.so.5.0 00:02:35.424 SYMLINK libspdk_bdev_iscsi.so 00:02:35.424 SYMLINK libspdk_bdev_malloc.so 00:02:35.424 SYMLINK libspdk_bdev_zone_block.so 00:02:35.424 SYMLINK libspdk_bdev_delay.so 00:02:35.424 SO libspdk_bdev_ocf.so.5.0 00:02:35.682 SYMLINK libspdk_bdev_passthru.so 00:02:35.682 LIB libspdk_bdev_virtio.a 00:02:35.682 SYMLINK libspdk_bdev_lvol.so 00:02:35.682 SO libspdk_bdev_virtio.so.5.0 00:02:35.682 SYMLINK libspdk_bdev_ocf.so 00:02:35.682 SYMLINK 
libspdk_bdev_virtio.so 00:02:35.940 LIB libspdk_bdev_raid.a 00:02:35.941 SO libspdk_bdev_raid.so.5.0 00:02:35.941 SYMLINK libspdk_bdev_raid.so 00:02:37.318 LIB libspdk_bdev_nvme.a 00:02:37.318 SO libspdk_bdev_nvme.so.6.0 00:02:37.318 SYMLINK libspdk_bdev_nvme.so 00:02:37.916 CC module/event/subsystems/sock/sock.o 00:02:37.916 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.916 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.916 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.916 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.916 CC module/event/subsystems/vmd/vmd.o 00:02:37.916 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.916 LIB libspdk_event_sock.a 00:02:37.916 LIB libspdk_event_scheduler.a 00:02:37.916 LIB libspdk_event_vhost_blk.a 00:02:37.916 LIB libspdk_event_iobuf.a 00:02:37.916 SO libspdk_event_sock.so.4.0 00:02:37.916 LIB libspdk_event_vmd.a 00:02:37.916 SO libspdk_event_scheduler.so.3.0 00:02:37.916 SO libspdk_event_vhost_blk.so.2.0 00:02:37.916 SO libspdk_event_iobuf.so.2.0 00:02:37.916 SO libspdk_event_vmd.so.5.0 00:02:38.175 SYMLINK libspdk_event_sock.so 00:02:38.175 SYMLINK libspdk_event_scheduler.so 00:02:38.175 SYMLINK libspdk_event_vhost_blk.so 00:02:38.175 SYMLINK libspdk_event_iobuf.so 00:02:38.175 SYMLINK libspdk_event_vmd.so 00:02:38.433 CC module/event/subsystems/accel/accel.o 00:02:38.433 LIB libspdk_event_accel.a 00:02:38.433 SO libspdk_event_accel.so.5.0 00:02:38.693 SYMLINK libspdk_event_accel.so 00:02:38.952 CC module/event/subsystems/bdev/bdev.o 00:02:38.952 LIB libspdk_event_bdev.a 00:02:39.211 SO libspdk_event_bdev.so.5.0 00:02:39.211 SYMLINK libspdk_event_bdev.so 00:02:39.470 CC module/event/subsystems/nbd/nbd.o 00:02:39.470 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:39.470 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:39.470 CC module/event/subsystems/scsi/scsi.o 00:02:39.470 CC module/event/subsystems/ublk/ublk.o 00:02:39.470 LIB libspdk_event_nbd.a 00:02:39.470 SO libspdk_event_nbd.so.5.0 00:02:39.470 LIB libspdk_event_scsi.a 00:02:39.470 LIB libspdk_event_ublk.a 00:02:39.470 SYMLINK libspdk_event_nbd.so 00:02:39.729 SO libspdk_event_scsi.so.5.0 00:02:39.729 SO libspdk_event_ublk.so.2.0 00:02:39.729 LIB libspdk_event_nvmf.a 00:02:39.729 SYMLINK libspdk_event_ublk.so 00:02:39.729 SYMLINK libspdk_event_scsi.so 00:02:39.729 SO libspdk_event_nvmf.so.5.0 00:02:39.729 SYMLINK libspdk_event_nvmf.so 00:02:39.988 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:39.988 CC module/event/subsystems/iscsi/iscsi.o 00:02:39.988 LIB libspdk_event_vhost_scsi.a 00:02:39.988 LIB libspdk_event_iscsi.a 00:02:40.248 SO libspdk_event_vhost_scsi.so.2.0 00:02:40.248 SO libspdk_event_iscsi.so.5.0 00:02:40.248 SYMLINK libspdk_event_vhost_scsi.so 00:02:40.248 SYMLINK libspdk_event_iscsi.so 00:02:40.248 SO libspdk.so.5.0 00:02:40.248 SYMLINK libspdk.so 00:02:40.512 CXX app/trace/trace.o 00:02:40.512 CC app/trace_record/trace_record.o 00:02:40.512 CC app/spdk_lspci/spdk_lspci.o 00:02:40.512 CC app/spdk_nvme_discover/discovery_aer.o 00:02:40.512 CC app/spdk_top/spdk_top.o 00:02:40.512 CC app/spdk_nvme_identify/identify.o 00:02:40.512 CC app/spdk_nvme_perf/perf.o 00:02:40.512 TEST_HEADER include/spdk/accel.h 00:02:40.512 TEST_HEADER include/spdk/accel_module.h 00:02:40.512 CC test/rpc_client/rpc_client_test.o 00:02:40.512 TEST_HEADER include/spdk/assert.h 00:02:40.512 TEST_HEADER include/spdk/barrier.h 00:02:40.512 TEST_HEADER include/spdk/base64.h 00:02:40.512 TEST_HEADER include/spdk/bdev.h 00:02:40.512 TEST_HEADER 
include/spdk/bdev_module.h 00:02:40.512 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.512 TEST_HEADER include/spdk/bit_array.h 00:02:40.512 TEST_HEADER include/spdk/bit_pool.h 00:02:40.512 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.512 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.512 TEST_HEADER include/spdk/blobfs.h 00:02:40.512 TEST_HEADER include/spdk/blob.h 00:02:40.776 TEST_HEADER include/spdk/conf.h 00:02:40.777 TEST_HEADER include/spdk/config.h 00:02:40.777 TEST_HEADER include/spdk/cpuset.h 00:02:40.777 TEST_HEADER include/spdk/crc16.h 00:02:40.777 CC app/spdk_dd/spdk_dd.o 00:02:40.777 TEST_HEADER include/spdk/crc32.h 00:02:40.777 TEST_HEADER include/spdk/crc64.h 00:02:40.777 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:40.777 TEST_HEADER include/spdk/dif.h 00:02:40.777 CC app/iscsi_tgt/iscsi_tgt.o 00:02:40.777 TEST_HEADER include/spdk/dma.h 00:02:40.777 CC app/nvmf_tgt/nvmf_main.o 00:02:40.777 TEST_HEADER include/spdk/endian.h 00:02:40.777 TEST_HEADER include/spdk/env_dpdk.h 00:02:40.777 TEST_HEADER include/spdk/env.h 00:02:40.777 CC app/vhost/vhost.o 00:02:40.777 TEST_HEADER include/spdk/event.h 00:02:40.777 TEST_HEADER include/spdk/fd_group.h 00:02:40.777 CC examples/ioat/perf/perf.o 00:02:40.777 TEST_HEADER include/spdk/fd.h 00:02:40.777 CC examples/vmd/led/led.o 00:02:40.777 CC examples/idxd/perf/perf.o 00:02:40.777 TEST_HEADER include/spdk/file.h 00:02:40.777 CC test/nvme/aer/aer.o 00:02:40.777 CC examples/nvme/abort/abort.o 00:02:40.777 TEST_HEADER include/spdk/ftl.h 00:02:40.777 CC examples/sock/hello_world/hello_sock.o 00:02:40.777 CC examples/vmd/lsvmd/lsvmd.o 00:02:40.777 CC test/event/reactor_perf/reactor_perf.o 00:02:40.777 CC app/fio/nvme/fio_plugin.o 00:02:40.777 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.777 CC examples/nvme/hello_world/hello_world.o 00:02:40.777 CC examples/nvme/hotplug/hotplug.o 00:02:40.777 CC examples/nvme/reconnect/reconnect.o 00:02:40.777 TEST_HEADER include/spdk/hexlify.h 00:02:40.777 CC examples/ioat/verify/verify.o 00:02:40.777 CC test/event/event_perf/event_perf.o 00:02:40.777 CC examples/accel/perf/accel_perf.o 00:02:40.777 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:40.777 CC test/env/vtophys/vtophys.o 00:02:40.777 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:40.777 CC test/env/pci/pci_ut.o 00:02:40.777 CC test/nvme/sgl/sgl.o 00:02:40.777 CC test/app/stub/stub.o 00:02:40.777 TEST_HEADER include/spdk/histogram_data.h 00:02:40.777 CC test/app/jsoncat/jsoncat.o 00:02:40.777 CC test/nvme/reserve/reserve.o 00:02:40.777 CC examples/nvme/arbitration/arbitration.o 00:02:40.777 TEST_HEADER include/spdk/idxd.h 00:02:40.777 CC test/event/reactor/reactor.o 00:02:40.777 CC test/nvme/e2edp/nvme_dp.o 00:02:40.777 CC app/spdk_tgt/spdk_tgt.o 00:02:40.777 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:40.777 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:40.777 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.777 CC test/nvme/overhead/overhead.o 00:02:40.777 CC test/thread/poller_perf/poller_perf.o 00:02:40.777 TEST_HEADER include/spdk/init.h 00:02:40.777 CC test/nvme/err_injection/err_injection.o 00:02:40.777 TEST_HEADER include/spdk/ioat.h 00:02:40.777 CC test/app/histogram_perf/histogram_perf.o 00:02:40.777 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.777 CC test/env/memory/memory_ut.o 00:02:40.777 CC test/nvme/compliance/nvme_compliance.o 00:02:40.777 CC examples/util/zipf/zipf.o 00:02:40.777 CC test/nvme/connect_stress/connect_stress.o 00:02:40.777 CC test/nvme/startup/startup.o 00:02:40.777 TEST_HEADER 
include/spdk/iscsi_spec.h 00:02:40.777 CC test/nvme/reset/reset.o 00:02:40.777 TEST_HEADER include/spdk/json.h 00:02:40.777 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.777 CC test/nvme/simple_copy/simple_copy.o 00:02:40.777 TEST_HEADER include/spdk/likely.h 00:02:40.777 CC test/event/app_repeat/app_repeat.o 00:02:40.777 TEST_HEADER include/spdk/log.h 00:02:40.777 CC test/nvme/boot_partition/boot_partition.o 00:02:40.777 CC test/blobfs/mkfs/mkfs.o 00:02:40.777 TEST_HEADER include/spdk/lvol.h 00:02:40.777 TEST_HEADER include/spdk/memory.h 00:02:40.777 CC examples/bdev/hello_world/hello_bdev.o 00:02:40.777 TEST_HEADER include/spdk/mmio.h 00:02:40.777 CC examples/nvmf/nvmf/nvmf.o 00:02:40.777 CC test/bdev/bdevio/bdevio.o 00:02:40.777 TEST_HEADER include/spdk/nbd.h 00:02:40.777 CC examples/thread/thread/thread_ex.o 00:02:40.777 CC examples/bdev/bdevperf/bdevperf.o 00:02:40.777 TEST_HEADER include/spdk/notify.h 00:02:40.777 CC test/accel/dif/dif.o 00:02:40.777 CC examples/blob/cli/blobcli.o 00:02:40.777 TEST_HEADER include/spdk/nvme.h 00:02:40.777 CC app/fio/bdev/fio_plugin.o 00:02:40.777 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.777 CC examples/blob/hello_world/hello_blob.o 00:02:40.777 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.777 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.777 CC test/dma/test_dma/test_dma.o 00:02:40.777 CC test/app/bdev_svc/bdev_svc.o 00:02:40.777 CC test/event/scheduler/scheduler.o 00:02:40.777 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.777 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.777 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.777 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.777 TEST_HEADER include/spdk/nvmf.h 00:02:40.777 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.777 CC test/lvol/esnap/esnap.o 00:02:40.777 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.777 CC test/env/mem_callbacks/mem_callbacks.o 00:02:40.777 TEST_HEADER include/spdk/opal.h 00:02:40.777 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:40.777 TEST_HEADER include/spdk/opal_spec.h 00:02:40.777 TEST_HEADER include/spdk/pci_ids.h 00:02:40.777 TEST_HEADER include/spdk/pipe.h 00:02:40.777 TEST_HEADER include/spdk/queue.h 00:02:41.038 TEST_HEADER include/spdk/reduce.h 00:02:41.038 TEST_HEADER include/spdk/rpc.h 00:02:41.038 TEST_HEADER include/spdk/scheduler.h 00:02:41.038 LINK spdk_lspci 00:02:41.038 TEST_HEADER include/spdk/scsi.h 00:02:41.038 TEST_HEADER include/spdk/scsi_spec.h 00:02:41.038 TEST_HEADER include/spdk/sock.h 00:02:41.038 TEST_HEADER include/spdk/stdinc.h 00:02:41.038 TEST_HEADER include/spdk/string.h 00:02:41.038 TEST_HEADER include/spdk/thread.h 00:02:41.038 TEST_HEADER include/spdk/trace.h 00:02:41.038 TEST_HEADER include/spdk/trace_parser.h 00:02:41.038 TEST_HEADER include/spdk/tree.h 00:02:41.038 TEST_HEADER include/spdk/ublk.h 00:02:41.038 TEST_HEADER include/spdk/util.h 00:02:41.038 TEST_HEADER include/spdk/uuid.h 00:02:41.038 LINK rpc_client_test 00:02:41.038 TEST_HEADER include/spdk/version.h 00:02:41.038 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:41.038 LINK spdk_nvme_discover 00:02:41.038 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:41.038 TEST_HEADER include/spdk/vhost.h 00:02:41.038 TEST_HEADER include/spdk/vmd.h 00:02:41.038 TEST_HEADER include/spdk/xor.h 00:02:41.038 TEST_HEADER include/spdk/zipf.h 00:02:41.038 CXX test/cpp_headers/accel.o 00:02:41.038 LINK reactor_perf 00:02:41.038 LINK lsvmd 00:02:41.038 LINK reactor 00:02:41.038 LINK event_perf 00:02:41.038 LINK vtophys 00:02:41.038 LINK interrupt_tgt 00:02:41.038 LINK spdk_trace_record 
00:02:41.038 LINK poller_perf 00:02:41.038 LINK histogram_perf 00:02:41.038 LINK pmr_persistence 00:02:41.038 LINK led 00:02:41.038 LINK jsoncat 00:02:41.038 LINK app_repeat 00:02:41.038 LINK nvmf_tgt 00:02:41.038 LINK vhost 00:02:41.038 LINK zipf 00:02:41.038 LINK verify 00:02:41.038 LINK startup 00:02:41.038 LINK connect_stress 00:02:41.038 LINK err_injection 00:02:41.038 LINK boot_partition 00:02:41.038 LINK stub 00:02:41.038 LINK cmb_copy 00:02:41.038 LINK iscsi_tgt 00:02:41.310 LINK ioat_perf 00:02:41.310 LINK env_dpdk_post_init 00:02:41.310 LINK spdk_tgt 00:02:41.310 LINK hotplug 00:02:41.310 LINK bdev_svc 00:02:41.310 LINK mkfs 00:02:41.310 LINK hello_sock 00:02:41.310 LINK nvme_dp 00:02:41.310 LINK reserve 00:02:41.310 LINK hello_world 00:02:41.310 LINK simple_copy 00:02:41.310 LINK hello_bdev 00:02:41.310 LINK hello_blob 00:02:41.310 LINK overhead 00:02:41.310 LINK sgl 00:02:41.310 LINK scheduler 00:02:41.310 LINK thread 00:02:41.310 LINK arbitration 00:02:41.310 LINK reconnect 00:02:41.310 LINK reset 00:02:41.310 LINK aer 00:02:41.310 CXX test/cpp_headers/accel_module.o 00:02:41.310 LINK spdk_trace 00:02:41.310 LINK nvme_compliance 00:02:41.310 LINK idxd_perf 00:02:41.310 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:41.310 CXX test/cpp_headers/assert.o 00:02:41.310 LINK spdk_dd 00:02:41.310 CXX test/cpp_headers/barrier.o 00:02:41.310 CXX test/cpp_headers/base64.o 00:02:41.310 CXX test/cpp_headers/bdev.o 00:02:41.310 CXX test/cpp_headers/bdev_module.o 00:02:41.570 CXX test/cpp_headers/bdev_zone.o 00:02:41.570 LINK abort 00:02:41.570 LINK nvmf 00:02:41.570 CXX test/cpp_headers/bit_array.o 00:02:41.570 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:41.570 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:41.570 CXX test/cpp_headers/bit_pool.o 00:02:41.570 CXX test/cpp_headers/blob_bdev.o 00:02:41.570 CXX test/cpp_headers/blobfs_bdev.o 00:02:41.570 CXX test/cpp_headers/blobfs.o 00:02:41.570 CXX test/cpp_headers/blob.o 00:02:41.570 CXX test/cpp_headers/conf.o 00:02:41.570 CC test/nvme/fused_ordering/fused_ordering.o 00:02:41.570 CXX test/cpp_headers/config.o 00:02:41.570 CXX test/cpp_headers/cpuset.o 00:02:41.570 CXX test/cpp_headers/crc16.o 00:02:41.570 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:41.570 LINK pci_ut 00:02:41.570 CXX test/cpp_headers/crc32.o 00:02:41.570 CC test/nvme/fdp/fdp.o 00:02:41.570 CXX test/cpp_headers/crc64.o 00:02:41.570 CXX test/cpp_headers/dif.o 00:02:41.570 CXX test/cpp_headers/dma.o 00:02:41.570 CC test/nvme/cuse/cuse.o 00:02:41.570 CXX test/cpp_headers/endian.o 00:02:41.570 CXX test/cpp_headers/env_dpdk.o 00:02:41.570 CXX test/cpp_headers/env.o 00:02:41.570 LINK dif 00:02:41.570 CXX test/cpp_headers/event.o 00:02:41.570 LINK bdevio 00:02:41.570 CXX test/cpp_headers/fd_group.o 00:02:41.570 CXX test/cpp_headers/fd.o 00:02:41.570 CXX test/cpp_headers/file.o 00:02:41.570 LINK nvme_fuzz 00:02:41.570 LINK test_dma 00:02:41.570 CXX test/cpp_headers/ftl.o 00:02:41.570 LINK accel_perf 00:02:41.570 CXX test/cpp_headers/gpt_spec.o 00:02:41.570 LINK nvme_manage 00:02:41.570 CXX test/cpp_headers/hexlify.o 00:02:41.570 CXX test/cpp_headers/histogram_data.o 00:02:41.570 CXX test/cpp_headers/idxd.o 00:02:41.831 CXX test/cpp_headers/idxd_spec.o 00:02:41.831 CXX test/cpp_headers/init.o 00:02:41.831 CXX test/cpp_headers/ioat.o 00:02:41.831 CXX test/cpp_headers/ioat_spec.o 00:02:41.831 LINK spdk_nvme 00:02:41.831 CXX test/cpp_headers/json.o 00:02:41.831 CXX test/cpp_headers/iscsi_spec.o 00:02:41.831 LINK blobcli 00:02:41.831 CXX test/cpp_headers/jsonrpc.o 
00:02:41.831 CXX test/cpp_headers/likely.o 00:02:41.831 LINK spdk_bdev 00:02:41.831 CXX test/cpp_headers/log.o 00:02:41.831 CXX test/cpp_headers/lvol.o 00:02:41.831 CXX test/cpp_headers/memory.o 00:02:41.831 CXX test/cpp_headers/nbd.o 00:02:41.831 CXX test/cpp_headers/mmio.o 00:02:41.831 CXX test/cpp_headers/notify.o 00:02:41.831 CXX test/cpp_headers/nvme.o 00:02:41.831 LINK doorbell_aers 00:02:41.831 LINK mem_callbacks 00:02:41.831 CXX test/cpp_headers/nvme_intel.o 00:02:41.831 CXX test/cpp_headers/nvme_ocssd.o 00:02:41.831 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:41.831 CXX test/cpp_headers/nvme_spec.o 00:02:41.831 CXX test/cpp_headers/nvme_zns.o 00:02:42.119 CXX test/cpp_headers/nvmf_cmd.o 00:02:42.119 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:42.119 CXX test/cpp_headers/nvmf.o 00:02:42.119 CXX test/cpp_headers/nvmf_spec.o 00:02:42.119 CXX test/cpp_headers/nvmf_transport.o 00:02:42.119 CXX test/cpp_headers/opal.o 00:02:42.119 CXX test/cpp_headers/opal_spec.o 00:02:42.119 CXX test/cpp_headers/pci_ids.o 00:02:42.119 CXX test/cpp_headers/pipe.o 00:02:42.119 LINK fused_ordering 00:02:42.119 CXX test/cpp_headers/queue.o 00:02:42.119 CXX test/cpp_headers/reduce.o 00:02:42.119 CXX test/cpp_headers/rpc.o 00:02:42.119 CXX test/cpp_headers/scheduler.o 00:02:42.119 CXX test/cpp_headers/scsi.o 00:02:42.119 CXX test/cpp_headers/scsi_spec.o 00:02:42.119 CXX test/cpp_headers/sock.o 00:02:42.119 CXX test/cpp_headers/stdinc.o 00:02:42.119 CXX test/cpp_headers/string.o 00:02:42.119 CXX test/cpp_headers/thread.o 00:02:42.119 CXX test/cpp_headers/trace.o 00:02:42.119 CXX test/cpp_headers/trace_parser.o 00:02:42.119 CXX test/cpp_headers/tree.o 00:02:42.119 CXX test/cpp_headers/ublk.o 00:02:42.119 CXX test/cpp_headers/util.o 00:02:42.119 LINK spdk_nvme_perf 00:02:42.119 LINK fdp 00:02:42.119 LINK spdk_nvme_identify 00:02:42.119 CXX test/cpp_headers/uuid.o 00:02:42.119 CXX test/cpp_headers/version.o 00:02:42.119 LINK spdk_top 00:02:42.119 LINK bdevperf 00:02:42.119 CXX test/cpp_headers/vfio_user_pci.o 00:02:42.119 CXX test/cpp_headers/vhost.o 00:02:42.119 CXX test/cpp_headers/vfio_user_spec.o 00:02:42.119 CXX test/cpp_headers/vmd.o 00:02:42.421 CXX test/cpp_headers/xor.o 00:02:42.421 CXX test/cpp_headers/zipf.o 00:02:42.421 LINK vhost_fuzz 00:02:42.421 LINK memory_ut 00:02:42.989 LINK cuse 00:02:43.927 LINK iscsi_fuzz 00:02:46.462 LINK esnap 00:02:46.720 00:02:46.720 real 0m52.003s 00:02:46.720 user 8m18.924s 00:02:46.720 sys 3m21.851s 00:02:46.720 19:57:44 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:46.721 19:57:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.721 ************************************ 00:02:46.721 END TEST make 00:02:46.721 ************************************ 00:02:46.980 19:57:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:02:46.980 19:57:44 -- nvmf/common.sh@7 -- # uname -s 00:02:46.980 19:57:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:46.980 19:57:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:46.980 19:57:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:46.980 19:57:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:46.980 19:57:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:46.980 19:57:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:46.980 19:57:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:46.980 19:57:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:46.980 19:57:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:02:46.980 19:57:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:46.980 19:57:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e 00:02:46.980 19:57:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e 00:02:46.980 19:57:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:46.980 19:57:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:46.980 19:57:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:46.980 19:57:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:02:46.980 19:57:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:46.980 19:57:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:46.980 19:57:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:46.980 19:57:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.980 19:57:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.980 19:57:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.980 19:57:44 -- paths/export.sh@5 -- # export PATH 00:02:46.980 19:57:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.980 19:57:44 -- nvmf/common.sh@46 -- # : 0 00:02:46.980 19:57:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:46.980 19:57:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:46.980 19:57:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:46.980 19:57:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:46.980 19:57:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:46.980 19:57:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:46.980 19:57:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:46.980 19:57:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:46.980 19:57:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:46.980 19:57:44 -- spdk/autotest.sh@32 -- # uname -s 00:02:46.980 19:57:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:46.980 19:57:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:46.980 19:57:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps 00:02:46.980 19:57:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:46.980 19:57:44 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps 00:02:46.980 19:57:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:46.980 19:57:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:46.980 19:57:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:46.980 19:57:44 -- spdk/autotest.sh@48 -- # udevadm_pid=1997799 00:02:46.980 19:57:44 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power 00:02:46.980 19:57:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:46.980 19:57:44 -- spdk/autotest.sh@54 -- # echo 1997801 00:02:46.980 19:57:44 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power 00:02:46.980 19:57:44 -- spdk/autotest.sh@56 -- # echo 1997802 00:02:46.980 19:57:44 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power 00:02:46.980 19:57:44 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:46.980 19:57:44 -- spdk/autotest.sh@60 -- # echo 1997803 00:02:46.980 19:57:44 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l 00:02:46.980 19:57:44 -- spdk/autotest.sh@62 -- # echo 1997804 00:02:46.980 19:57:44 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:46.980 19:57:44 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:46.980 19:57:44 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l 00:02:46.980 19:57:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:46.980 19:57:44 -- common/autotest_common.sh@10 -- # set +x 00:02:46.980 19:57:44 -- spdk/autotest.sh@70 -- # create_test_list 00:02:46.980 19:57:44 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:46.980 19:57:44 -- common/autotest_common.sh@10 -- # set +x 00:02:46.980 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:46.980 Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:46.980 19:57:44 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh 00:02:46.981 19:57:44 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk 00:02:46.981 19:57:44 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:02:46.981 19:57:44 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output 00:02:46.981 19:57:44 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvme-phy-autotest/spdk 00:02:46.981 19:57:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:46.981 19:57:44 -- common/autotest_common.sh@1440 -- # uname 00:02:46.981 19:57:44 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:46.981 19:57:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:46.981 19:57:44 -- common/autotest_common.sh@1460 -- # uname 00:02:46.981 19:57:44 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:46.981 19:57:44 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:46.981 19:57:44 -- spdk/autotest.sh@82 -- # 
CC_TYPE=CC_TYPE=gcc 00:02:46.981 19:57:44 -- spdk/autotest.sh@83 -- # hash lcov 00:02:46.981 19:57:44 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:46.981 19:57:44 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:46.981 --rc lcov_branch_coverage=1 00:02:46.981 --rc lcov_function_coverage=1 00:02:46.981 --rc genhtml_branch_coverage=1 00:02:46.981 --rc genhtml_function_coverage=1 00:02:46.981 --rc genhtml_legend=1 00:02:46.981 --rc geninfo_all_blocks=1 00:02:46.981 ' 00:02:46.981 19:57:44 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:46.981 --rc lcov_branch_coverage=1 00:02:46.981 --rc lcov_function_coverage=1 00:02:46.981 --rc genhtml_branch_coverage=1 00:02:46.981 --rc genhtml_function_coverage=1 00:02:46.981 --rc genhtml_legend=1 00:02:46.981 --rc geninfo_all_blocks=1 00:02:46.981 ' 00:02:46.981 19:57:44 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:46.981 --rc lcov_branch_coverage=1 00:02:46.981 --rc lcov_function_coverage=1 00:02:46.981 --rc genhtml_branch_coverage=1 00:02:46.981 --rc genhtml_function_coverage=1 00:02:46.981 --rc genhtml_legend=1 00:02:46.981 --rc geninfo_all_blocks=1 00:02:46.981 --no-external' 00:02:46.981 19:57:44 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:46.981 --rc lcov_branch_coverage=1 00:02:46.981 --rc lcov_function_coverage=1 00:02:46.981 --rc genhtml_branch_coverage=1 00:02:46.981 --rc genhtml_function_coverage=1 00:02:46.981 --rc genhtml_legend=1 00:02:46.981 --rc geninfo_all_blocks=1 00:02:46.981 --no-external' 00:02:46.981 19:57:44 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:47.239 lcov: LCOV version 1.14 00:02:47.239 19:57:44 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvme-phy-autotest/spdk -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info 00:02:55.373 /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:55.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:55.373 /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:55.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:55.373 /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:55.373 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/assert.gcno:no 
functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:17.324 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:17.324 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:17.325 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no 
functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:17.325 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:17.325 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:17.325 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:17.326 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:17.326 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:17.326 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:18.705 19:58:16 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:03:18.705 19:58:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:18.705 19:58:16 -- common/autotest_common.sh@10 -- # set +x 00:03:18.705 19:58:16 -- spdk/autotest.sh@102 -- # rm -f 00:03:18.705 19:58:16 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:03:21.997 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:21.997 0000:00:04.7 (8086 2021): 
Already using the ioatdma driver 00:03:21.997 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:21.997 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:21.997 19:58:19 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:21.997 19:58:19 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:21.997 19:58:19 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:21.997 19:58:19 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:21.997 19:58:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:21.997 19:58:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:21.997 19:58:19 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:21.997 19:58:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:21.997 19:58:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:21.997 19:58:19 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:22.257 19:58:19 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:22.257 19:58:19 -- spdk/autotest.sh@121 -- # grep -v p 00:03:22.257 19:58:19 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:22.257 19:58:19 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:22.257 19:58:19 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:22.257 19:58:19 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:22.257 19:58:19 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:22.257 No valid GPT data, bailing 00:03:22.257 19:58:20 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:22.257 19:58:20 -- scripts/common.sh@393 -- # pt= 00:03:22.257 19:58:20 -- scripts/common.sh@394 -- # return 1 00:03:22.257 19:58:20 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:22.257 1+0 records in 00:03:22.257 1+0 records out 00:03:22.257 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00724227 s, 145 MB/s 00:03:22.257 19:58:20 -- spdk/autotest.sh@129 -- # sync 00:03:22.257 19:58:20 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:22.257 19:58:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:22.257 19:58:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:27.560 19:58:25 -- spdk/autotest.sh@135 -- # uname -s 00:03:27.560 19:58:25 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:27.560 19:58:25 -- spdk/autotest.sh@136 -- # run_test setup.sh 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/test-setup.sh 00:03:27.560 19:58:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:27.560 19:58:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:27.560 19:58:25 -- common/autotest_common.sh@10 -- # set +x 00:03:27.560 ************************************ 00:03:27.560 START TEST setup.sh 00:03:27.560 ************************************ 00:03:27.560 19:58:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/test-setup.sh 00:03:27.560 * Looking for test storage... 00:03:27.560 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup 00:03:27.560 19:58:25 -- setup/test-setup.sh@10 -- # uname -s 00:03:27.560 19:58:25 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:27.560 19:58:25 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/acl.sh 00:03:27.560 19:58:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:27.560 19:58:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:27.560 19:58:25 -- common/autotest_common.sh@10 -- # set +x 00:03:27.560 ************************************ 00:03:27.560 START TEST acl 00:03:27.560 ************************************ 00:03:27.560 19:58:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/acl.sh 00:03:27.560 * Looking for test storage... 00:03:27.560 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup 00:03:27.560 19:58:25 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:27.560 19:58:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:27.560 19:58:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:27.560 19:58:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:27.560 19:58:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:27.560 19:58:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:27.560 19:58:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:27.560 19:58:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.560 19:58:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:27.560 19:58:25 -- setup/acl.sh@12 -- # devs=() 00:03:27.560 19:58:25 -- setup/acl.sh@12 -- # declare -a devs 00:03:27.560 19:58:25 -- setup/acl.sh@13 -- # drivers=() 00:03:27.560 19:58:25 -- setup/acl.sh@13 -- # declare -A drivers 00:03:27.560 19:58:25 -- setup/acl.sh@51 -- # setup reset 00:03:27.560 19:58:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.560 19:58:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.751 19:58:28 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:31.751 19:58:28 -- setup/acl.sh@16 -- # local dev driver 00:03:31.751 19:58:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.751 19:58:28 -- setup/acl.sh@15 -- # setup output status 00:03:31.751 19:58:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.751 19:58:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status 00:03:34.295 Hugepages 00:03:34.295 node hugesize free / total 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # 
[[ 2048kB == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 00:03:34.295 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:31 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:31 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:31 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:34.295 19:58:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 
19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:34.295 19:58:32 -- setup/acl.sh@20 -- # continue 00:03:34.295 19:58:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:34.295 19:58:32 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:34.295 19:58:32 -- setup/acl.sh@54 -- # run_test denied denied 00:03:34.295 19:58:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:34.295 19:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:34.295 19:58:32 -- common/autotest_common.sh@10 -- # set +x 00:03:34.295 ************************************ 00:03:34.295 START TEST denied 00:03:34.295 ************************************ 00:03:34.295 19:58:32 -- common/autotest_common.sh@1104 -- # denied 00:03:34.295 19:58:32 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:34.295 19:58:32 -- setup/acl.sh@38 -- # setup output config 00:03:34.295 19:58:32 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:34.295 19:58:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.295 19:58:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:03:38.489 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:38.489 19:58:35 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:38.489 19:58:35 -- setup/acl.sh@28 -- # local dev driver 00:03:38.489 19:58:35 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:38.489 19:58:35 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:38.489 19:58:35 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:38.489 19:58:35 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:38.489 19:58:35 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:38.489 19:58:35 -- setup/acl.sh@41 -- # setup reset 
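For reference, the 'denied' verification xtraced above comes down to resolving the device's driver symlink in sysfs and comparing its basename; a minimal standalone equivalent, reusing the BDF this run blocks, is:

  # check which kernel driver a PCI function is currently bound to
  bdf=0000:5e:00.0
  driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
  [[ $driver == nvme ]] && echo "$bdf was skipped by setup.sh and stays on the kernel nvme driver"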
00:03:38.489 19:58:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.489 19:58:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.681 00:03:42.681 real 0m8.264s 00:03:42.681 user 0m2.608s 00:03:42.681 sys 0m4.933s 00:03:42.681 19:58:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.681 19:58:40 -- common/autotest_common.sh@10 -- # set +x 00:03:42.681 ************************************ 00:03:42.681 END TEST denied 00:03:42.681 ************************************ 00:03:42.681 19:58:40 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:42.681 19:58:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.681 19:58:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.681 19:58:40 -- common/autotest_common.sh@10 -- # set +x 00:03:42.681 ************************************ 00:03:42.681 START TEST allowed 00:03:42.681 ************************************ 00:03:42.681 19:58:40 -- common/autotest_common.sh@1104 -- # allowed 00:03:42.682 19:58:40 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:42.682 19:58:40 -- setup/acl.sh@45 -- # setup output config 00:03:42.682 19:58:40 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:42.682 19:58:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.682 19:58:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:03:49.251 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:49.251 19:58:47 -- setup/acl.sh@47 -- # verify 00:03:49.251 19:58:47 -- setup/acl.sh@28 -- # local dev driver 00:03:49.251 19:58:47 -- setup/acl.sh@48 -- # setup reset 00:03:49.251 19:58:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.251 19:58:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:03:53.446 00:03:53.446 real 0m10.377s 00:03:53.446 user 0m2.424s 00:03:53.446 sys 0m4.843s 00:03:53.446 19:58:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.446 19:58:50 -- common/autotest_common.sh@10 -- # set +x 00:03:53.446 ************************************ 00:03:53.446 END TEST allowed 00:03:53.446 ************************************ 00:03:53.446 00:03:53.446 real 0m25.713s 00:03:53.446 user 0m7.537s 00:03:53.446 sys 0m14.607s 00:03:53.446 19:58:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.446 19:58:50 -- common/autotest_common.sh@10 -- # set +x 00:03:53.446 ************************************ 00:03:53.446 END TEST acl 00:03:53.446 ************************************ 00:03:53.446 19:58:50 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/hugepages.sh 00:03:53.446 19:58:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.446 19:58:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.446 19:58:50 -- common/autotest_common.sh@10 -- # set +x 00:03:53.446 ************************************ 00:03:53.446 START TEST hugepages 00:03:53.446 ************************************ 00:03:53.446 19:58:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/hugepages.sh 00:03:53.446 * Looking for test storage... 
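The two sub-tests that just finished ("denied" and "allowed") exercise the PCI filter of scripts/setup.sh in opposite directions. Stripped of the run_test/verify wrappers, and using the paths and addresses shown in this trace, they are roughly:

  SETUP=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh

  # denied: with the controller on the block list, "config" must refuse to touch it
  PCI_BLOCKED=' 0000:5e:00.0' "$SETUP" config | grep 'Skipping denied controller at 0000:5e:00.0'
  "$SETUP" reset

  # allowed: with only this controller allowed, "config" must rebind it (nvme -> vfio-pci here)
  PCI_ALLOWED=0000:5e:00.0 "$SETUP" config | grep -E '0000:5e:00.0 .*: nvme -> .*'
  "$SETUP" reset

Both runs finish with a reset so the controller is handed back to the kernel nvme driver before the next test starts.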
00:03:53.446 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup 00:03:53.446 19:58:51 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:53.446 19:58:51 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:53.446 19:58:51 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:53.446 19:58:51 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:53.446 19:58:51 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:53.446 19:58:51 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:53.446 19:58:51 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:53.446 19:58:51 -- setup/common.sh@18 -- # local node= 00:03:53.446 19:58:51 -- setup/common.sh@19 -- # local var val 00:03:53.446 19:58:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:53.446 19:58:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.446 19:58:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.446 19:58:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.446 19:58:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.446 19:58:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 72454712 kB' 'MemAvailable: 76217508 kB' 'Buffers: 12472 kB' 'Cached: 13892752 kB' 'SwapCached: 0 kB' 'Active: 10628648 kB' 'Inactive: 3702076 kB' 'Active(anon): 10017672 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 429080 kB' 'Mapped: 175944 kB' 'Shmem: 9592172 kB' 'KReclaimable: 206324 kB' 'Slab: 500432 kB' 'SReclaimable: 206324 kB' 'SUnreclaim: 294108 kB' 'KernelStack: 16000 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438196 kB' 'Committed_AS: 11302220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198568 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.446 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.446 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # continue 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:53.447 19:58:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:53.447 19:58:51 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:53.447 19:58:51 -- setup/common.sh@33 -- # echo 2048 00:03:53.447 19:58:51 -- setup/common.sh@33 -- # return 0 00:03:53.447 19:58:51 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:53.447 19:58:51 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:53.447 19:58:51 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:53.447 19:58:51 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:53.447 19:58:51 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:53.447 19:58:51 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
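Every one of the "[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue" pairs above is one iteration of the get_meminfo helper walking /proc/meminfo until it reaches the requested key. A compact sketch of that helper (the real one also mapfiles the whole file and strips the "Node N " prefix when a node is given):

  get_meminfo() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip MemTotal, MemFree, ... until the key matches
          echo "$val"
          return 0
      done < "$mem_f"
  }

  get_meminfo Hugepagesize   # -> 2048 (kB), which is why default_hugepages becomes 2048 above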
00:03:53.447 19:58:51 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:53.447 19:58:51 -- setup/hugepages.sh@207 -- # get_nodes 00:03:53.447 19:58:51 -- setup/hugepages.sh@27 -- # local node 00:03:53.447 19:58:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.447 19:58:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:53.447 19:58:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.447 19:58:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:53.447 19:58:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:53.447 19:58:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.447 19:58:51 -- setup/hugepages.sh@208 -- # clear_hp 00:03:53.447 19:58:51 -- setup/hugepages.sh@37 -- # local node hp 00:03:53.447 19:58:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:53.447 19:58:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.447 19:58:51 -- setup/hugepages.sh@41 -- # echo 0 00:03:53.447 19:58:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.447 19:58:51 -- setup/hugepages.sh@41 -- # echo 0 00:03:53.447 19:58:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:53.447 19:58:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.447 19:58:51 -- setup/hugepages.sh@41 -- # echo 0 00:03:53.447 19:58:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:53.447 19:58:51 -- setup/hugepages.sh@41 -- # echo 0 00:03:53.447 19:58:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:53.448 19:58:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:53.448 19:58:51 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:53.448 19:58:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.448 19:58:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.448 19:58:51 -- common/autotest_common.sh@10 -- # set +x 00:03:53.448 ************************************ 00:03:53.448 START TEST default_setup 00:03:53.448 ************************************ 00:03:53.448 19:58:51 -- common/autotest_common.sh@1104 -- # default_setup 00:03:53.448 19:58:51 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:53.448 19:58:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.448 19:58:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:53.448 19:58:51 -- setup/hugepages.sh@51 -- # shift 00:03:53.448 19:58:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:53.448 19:58:51 -- setup/hugepages.sh@52 -- # local node_ids 00:03:53.448 19:58:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.448 19:58:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.448 19:58:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:53.448 19:58:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:53.448 19:58:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.448 19:58:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.448 19:58:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:53.448 19:58:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.448 19:58:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.448 19:58:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:53.448 19:58:51 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:53.448 19:58:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:53.448 19:58:51 -- setup/hugepages.sh@73 -- # return 0 00:03:53.448 19:58:51 -- setup/hugepages.sh@137 -- # setup output 00:03:53.448 19:58:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.448 19:58:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:03:56.758 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:56.758 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:00.054 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:00.054 19:58:57 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:00.054 19:58:57 -- setup/hugepages.sh@89 -- # local node 00:04:00.054 19:58:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:00.054 19:58:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:00.054 19:58:57 -- setup/hugepages.sh@92 -- # local surp 00:04:00.054 19:58:57 -- setup/hugepages.sh@93 -- # local resv 00:04:00.054 19:58:57 -- setup/hugepages.sh@94 -- # local anon 00:04:00.054 19:58:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:00.054 19:58:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:00.054 19:58:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:00.054 19:58:57 -- setup/common.sh@18 -- # local node= 00:04:00.054 19:58:57 -- setup/common.sh@19 -- # local var val 00:04:00.054 19:58:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.054 19:58:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.055 19:58:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.055 19:58:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.055 19:58:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.055 19:58:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74623380 kB' 'MemAvailable: 78386092 kB' 'Buffers: 12472 kB' 'Cached: 13892864 kB' 'SwapCached: 0 kB' 'Active: 10646368 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035392 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446408 kB' 'Mapped: 175964 kB' 'Shmem: 9592284 kB' 'KReclaimable: 206156 kB' 'Slab: 499136 kB' 'SReclaimable: 206156 kB' 'SUnreclaim: 292980 kB' 'KernelStack: 16208 kB' 'PageTables: 8256 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11319064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198584 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 
-- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.055 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.055 19:58:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:00.056 19:58:57 -- setup/common.sh@33 -- # echo 0 00:04:00.056 19:58:57 -- setup/common.sh@33 -- # return 0 00:04:00.056 19:58:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:00.056 19:58:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:00.056 19:58:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.056 19:58:57 -- setup/common.sh@18 -- # local node= 00:04:00.056 19:58:57 -- setup/common.sh@19 -- # local var val 00:04:00.056 19:58:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.056 19:58:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.056 19:58:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.056 19:58:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.056 19:58:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.056 19:58:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74623384 kB' 'MemAvailable: 78386096 kB' 'Buffers: 12472 kB' 'Cached: 13892864 kB' 'SwapCached: 0 kB' 'Active: 10646652 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035676 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446604 kB' 'Mapped: 175964 kB' 'Shmem: 9592284 kB' 'KReclaimable: 206156 kB' 'Slab: 499088 kB' 'SReclaimable: 206156 kB' 'SUnreclaim: 292932 kB' 'KernelStack: 16208 kB' 'PageTables: 8548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11320468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198552 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 
'DirectMap1G: 91226112 kB' 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 
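Between the two meminfo snapshots, the clear_hp / get_test_nr_hugepages calls a few entries back reduce to the steps below: zero every existing per-node hugepage pool, then size a 2 GiB pool for the test. The direct sysfs write at the end is shown only as the net effect; the script appears to hand the page count to setup.sh (the run whose ioatdma/nvme -> vfio-pci rebinds are listed above) rather than writing the knob itself.

  # clear_hp: every pool on every node is dropped to zero first
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"
      done
  done

  # default_setup asked for 2097152 kB (2 GiB) on node 0 with 2048 kB pages
  size_kb=2097152
  hugepagesize_kb=2048
  nr_hugepages=$(( size_kb / hugepagesize_kb ))   # -> 1024, matching HugePages_Total above
  echo "$nr_hugepages" > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages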
00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.056 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.056 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 
-- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.057 19:58:57 -- setup/common.sh@33 -- # echo 0 00:04:00.057 19:58:57 -- setup/common.sh@33 -- # return 0 00:04:00.057 19:58:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:00.057 19:58:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:00.057 19:58:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:00.057 19:58:57 -- setup/common.sh@18 -- # local node= 00:04:00.057 19:58:57 -- setup/common.sh@19 -- # local var val 00:04:00.057 19:58:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.057 19:58:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.057 19:58:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.057 19:58:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.057 19:58:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.057 19:58:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74622188 kB' 'MemAvailable: 78384900 kB' 'Buffers: 12472 kB' 'Cached: 13892868 kB' 'SwapCached: 0 kB' 'Active: 10646612 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035636 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446060 kB' 'Mapped: 175896 kB' 'Shmem: 9592288 kB' 'KReclaimable: 206156 kB' 'Slab: 499016 kB' 'SReclaimable: 206156 kB' 'SUnreclaim: 292860 kB' 'KernelStack: 16256 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11320480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198648 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.057 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.057 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 
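[editor's note] Before each scan, common.sh@28-29 captures one snapshot of the whole meminfo table with mapfile and strips the "Node <id> " prefix that per-node meminfo files carry — visible in the trace as mem=("${mem[@]#Node +([0-9]) }") and the long printf '%s\n' 'MemTotal: ...' entry. Roughly, as an assumption about the intent rather than a copy of the script:

    # Sketch: take one snapshot so repeated lookups see a consistent view,
    # and drop the "Node 0 " prefix used by /sys/devices/system/node/node0/meminfo.
    shopt -s extglob
    mapfile -t mem < /proc/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_(Total|Free|Rsvd|Surp):'
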
00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.058 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.058 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.059 19:58:57 -- setup/common.sh@33 -- # echo 0 00:04:00.059 19:58:57 -- setup/common.sh@33 -- # return 0 00:04:00.059 19:58:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:00.059 19:58:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.059 nr_hugepages=1024 00:04:00.059 19:58:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.059 resv_hugepages=0 00:04:00.059 19:58:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.059 surplus_hugepages=0 00:04:00.059 19:58:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.059 anon_hugepages=0 00:04:00.059 19:58:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.059 19:58:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.059 19:58:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.059 19:58:57 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:00.059 19:58:57 -- setup/common.sh@18 -- # local node= 00:04:00.059 19:58:57 -- setup/common.sh@19 -- # local var val 00:04:00.059 19:58:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.059 19:58:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.059 19:58:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.059 19:58:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.059 19:58:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.059 19:58:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74620876 kB' 'MemAvailable: 78383588 kB' 'Buffers: 12472 kB' 'Cached: 13892868 kB' 'SwapCached: 0 kB' 'Active: 10647776 kB' 'Inactive: 3702076 kB' 'Active(anon): 10036800 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 447724 kB' 'Mapped: 176392 kB' 'Shmem: 9592288 kB' 'KReclaimable: 206156 kB' 'Slab: 498984 kB' 'SReclaimable: 206156 kB' 'SUnreclaim: 292828 kB' 'KernelStack: 16224 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11321984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198616 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
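[editor's note] The hugepages.sh@99-110 entries above record surp=0, resv=0 and nr_hugepages=1024, then re-read HugePages_Total to confirm the global pool balances before each NUMA node is checked. The invariant being tested, restated with the values echoed in the trace (variable names illustrative):

    nr_hugepages=1024; resv=0; surp=0; total=1024   # values echoed in the trace above
    (( total == nr_hugepages + surp + resv )) && echo "global hugepage pool is consistent"
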
00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.059 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.059 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # 
continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 
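[editor's note] The same get_meminfo helper is called next with node=0, and the common.sh@22-25 entries show how the source file is chosen: with no node argument the path /sys/devices/system/node/node/meminfo does not exist, so it stays on /proc/meminfo; with node=N it switches to the per-node file. A hedged sketch of that selection (the sysfs paths are the standard ones; the wrapper name is made up):

    # Sketch: pick the system-wide or the per-NUMA-node meminfo file.
    pick_meminfo_file() {
        local node=${1:-} mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }
    pick_meminfo_file      # -> /proc/meminfo (node unset)
    pick_meminfo_file 0    # -> /sys/devices/system/node/node0/meminfo on a NUMA box
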
00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.060 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.060 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.060 19:58:57 -- setup/common.sh@33 -- # echo 1024 00:04:00.060 19:58:57 -- setup/common.sh@33 -- # return 0 00:04:00.060 19:58:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.060 19:58:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.060 19:58:57 -- setup/hugepages.sh@27 -- # local node 00:04:00.060 19:58:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.060 19:58:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:00.060 19:58:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.060 19:58:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:00.060 19:58:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.060 19:58:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.060 19:58:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.060 19:58:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.060 19:58:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.060 19:58:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.060 19:58:57 -- setup/common.sh@18 -- # local node=0 00:04:00.060 19:58:57 -- setup/common.sh@19 -- # local var val 00:04:00.060 19:58:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:00.061 19:58:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.061 19:58:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.061 19:58:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.061 19:58:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.061 19:58:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.061 19:58:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069888 kB' 'MemFree: 38907340 kB' 'MemUsed: 9162548 kB' 'SwapCached: 0 kB' 'Active: 6175588 kB' 'Inactive: 263784 kB' 'Active(anon): 5695792 kB' 'Inactive(anon): 0 kB' 'Active(file): 479796 kB' 
'Inactive(file): 263784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6027364 kB' 'Mapped: 175096 kB' 'AnonPages: 415160 kB' 'Shmem: 5283784 kB' 'KernelStack: 8840 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114948 kB' 'Slab: 265432 kB' 'SReclaimable: 114948 kB' 'SUnreclaim: 150484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 
19:58:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': 
' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.061 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.061 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.062 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.062 19:58:57 -- setup/common.sh@32 -- # continue 00:04:00.062 19:58:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:00.062 19:58:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:00.062 19:58:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.062 19:58:57 -- setup/common.sh@33 -- # echo 0 00:04:00.062 19:58:57 -- setup/common.sh@33 -- # return 0 00:04:00.062 19:58:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:00.062 19:58:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:00.062 19:58:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:00.062 19:58:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:00.062 19:58:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:00.062 node0=1024 expecting 1024 00:04:00.062 19:58:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:00.062 00:04:00.062 real 0m6.688s 00:04:00.062 user 0m1.373s 00:04:00.062 sys 0m2.267s 00:04:00.062 19:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.062 19:58:57 -- common/autotest_common.sh@10 -- # set +x 00:04:00.062 ************************************ 00:04:00.062 END TEST default_setup 00:04:00.062 ************************************ 00:04:00.062 19:58:57 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:00.062 19:58:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.062 19:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.062 19:58:57 -- common/autotest_common.sh@10 -- # set +x 00:04:00.062 ************************************ 00:04:00.062 START TEST per_node_1G_alloc 00:04:00.062 ************************************ 00:04:00.062 19:58:57 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:00.062 19:58:57 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:00.062 19:58:57 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:00.062 19:58:57 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:00.062 19:58:57 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:00.062 19:58:57 -- setup/hugepages.sh@51 -- # shift 00:04:00.062 19:58:57 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:00.062 19:58:57 -- setup/hugepages.sh@52 -- # local node_ids 00:04:00.062 19:58:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:00.062 19:58:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:00.062 19:58:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:00.062 19:58:57 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:00.062 19:58:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:00.062 19:58:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:00.062 19:58:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:00.062 19:58:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:00.062 19:58:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:00.062 19:58:57 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:00.062 19:58:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.062 19:58:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:00.062 19:58:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:00.062 19:58:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:00.062 19:58:57 -- setup/hugepages.sh@73 -- # return 0 00:04:00.062 19:58:57 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:00.062 
19:58:57 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:00.062 19:58:57 -- setup/hugepages.sh@146 -- # setup output 00:04:00.062 19:58:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.062 19:58:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:04:03.405 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:03.405 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:03.405 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:03.405 19:59:01 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:03.405 19:59:01 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:03.405 19:59:01 -- setup/hugepages.sh@89 -- # local node 00:04:03.405 19:59:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:03.405 19:59:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:03.405 19:59:01 -- setup/hugepages.sh@92 -- # local surp 00:04:03.405 19:59:01 -- setup/hugepages.sh@93 -- # local resv 00:04:03.405 19:59:01 -- setup/hugepages.sh@94 -- # local anon 00:04:03.405 19:59:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:03.405 19:59:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:03.405 19:59:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:03.405 19:59:01 -- setup/common.sh@18 -- # local node= 00:04:03.405 19:59:01 -- setup/common.sh@19 -- # local var val 00:04:03.405 19:59:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.405 19:59:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.405 19:59:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.405 19:59:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.405 19:59:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.405 19:59:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.405 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.405 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74593772 kB' 'MemAvailable: 78356420 kB' 'Buffers: 12472 kB' 'Cached: 13892976 kB' 'SwapCached: 0 kB' 'Active: 10646192 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035216 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445652 kB' 'Mapped: 175940 kB' 
'Shmem: 9592396 kB' 'KReclaimable: 206028 kB' 'Slab: 498968 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292940 kB' 'KernelStack: 16048 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11316644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198696 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': 
' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.406 19:59:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.406 19:59:01 -- 
setup/common.sh@32 -- # continue 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.406 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.407 19:59:01 -- setup/common.sh@33 -- # echo 0 00:04:03.407 19:59:01 -- setup/common.sh@33 -- # return 0 00:04:03.407 19:59:01 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.407 19:59:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.407 19:59:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.407 19:59:01 -- setup/common.sh@18 -- # local node= 00:04:03.407 19:59:01 -- setup/common.sh@19 -- # local var val 00:04:03.407 19:59:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.407 19:59:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.407 19:59:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.407 19:59:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.407 19:59:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.407 19:59:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74594316 kB' 'MemAvailable: 78356964 kB' 'Buffers: 12472 kB' 'Cached: 13892980 kB' 'SwapCached: 0 kB' 'Active: 10644384 kB' 'Inactive: 3702076 kB' 'Active(anon): 10033408 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444268 kB' 'Mapped: 175356 kB' 'Shmem: 9592400 kB' 'KReclaimable: 206028 kB' 'Slab: 498980 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292952 kB' 'KernelStack: 16048 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11309564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198680 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 
kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 
19:59:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.407 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.407 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 
19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 
19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.408 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.408 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.408 19:59:01 -- setup/common.sh@33 -- # echo 0 00:04:03.408 19:59:01 -- setup/common.sh@33 -- # return 0 00:04:03.408 19:59:01 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.408 19:59:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.408 19:59:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.408 19:59:01 -- setup/common.sh@18 -- # local node= 00:04:03.408 19:59:01 -- setup/common.sh@19 -- # local var val 00:04:03.408 19:59:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.408 19:59:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.408 19:59:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.408 19:59:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.408 19:59:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.408 19:59:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.409 19:59:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74593904 kB' 'MemAvailable: 78356552 kB' 'Buffers: 12472 kB' 'Cached: 13892992 kB' 'SwapCached: 0 kB' 'Active: 10643564 kB' 'Inactive: 3702076 kB' 'Active(anon): 10032588 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443428 kB' 'Mapped: 175092 kB' 'Shmem: 9592412 kB' 'KReclaimable: 206028 kB' 'Slab: 498940 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292912 kB' 'KernelStack: 15984 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11309580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198632 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # 
continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.409 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.409 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 
19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.410 19:59:01 -- setup/common.sh@33 -- # echo 0 00:04:03.410 19:59:01 -- setup/common.sh@33 -- # return 0 00:04:03.410 19:59:01 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.410 19:59:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:03.410 nr_hugepages=1024 00:04:03.410 19:59:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.410 resv_hugepages=0 00:04:03.410 19:59:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.410 surplus_hugepages=0 00:04:03.410 19:59:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.410 anon_hugepages=0 00:04:03.410 19:59:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.410 19:59:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:03.410 19:59:01 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:04:03.410 19:59:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.410 19:59:01 -- setup/common.sh@18 -- # local node= 00:04:03.410 19:59:01 -- setup/common.sh@19 -- # local var val 00:04:03.410 19:59:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.410 19:59:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.410 19:59:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.410 19:59:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.410 19:59:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.410 19:59:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74593904 kB' 'MemAvailable: 78356552 kB' 'Buffers: 12472 kB' 'Cached: 13893016 kB' 'SwapCached: 0 kB' 'Active: 10643312 kB' 'Inactive: 3702076 kB' 'Active(anon): 10032336 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443164 kB' 'Mapped: 175092 kB' 'Shmem: 9592436 kB' 'KReclaimable: 206028 kB' 'Slab: 498940 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292912 kB' 'KernelStack: 15984 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11309592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198632 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.410 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.410 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.410 19:59:01 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.412 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.412 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.412 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.412 19:59:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.412 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.412 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.412 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.412 19:59:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.412 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.412 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.412 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.412 19:59:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.413 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.413 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.414 19:59:01 -- setup/common.sh@33 -- # echo 1024 00:04:03.414 19:59:01 -- setup/common.sh@33 -- # return 0 00:04:03.414 19:59:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.414 19:59:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.414 19:59:01 -- setup/hugepages.sh@27 -- # local node 00:04:03.414 19:59:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.414 19:59:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.414 19:59:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.414 19:59:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.414 19:59:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.414 19:59:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.414 19:59:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.414 19:59:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.414 19:59:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.414 19:59:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.414 19:59:01 -- setup/common.sh@18 -- # local node=0 00:04:03.414 19:59:01 -- setup/common.sh@19 -- # local var val 00:04:03.414 19:59:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.414 19:59:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.414 19:59:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.414 19:59:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.414 19:59:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.414 19:59:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.414 19:59:01 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 48069888 kB' 'MemFree: 39946876 kB' 'MemUsed: 8123012 kB' 'SwapCached: 0 kB' 'Active: 6170512 kB' 'Inactive: 263784 kB' 'Active(anon): 5690716 kB' 'Inactive(anon): 0 kB' 'Active(file): 479796 kB' 'Inactive(file): 263784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6027480 kB' 'Mapped: 174588 kB' 'AnonPages: 410084 kB' 'Shmem: 5283900 kB' 'KernelStack: 8936 kB' 'PageTables: 5068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114820 kB' 'Slab: 265432 kB' 'SReclaimable: 114820 kB' 'SUnreclaim: 150612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.414 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.414 19:59:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 
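Note on the long runs of "-- # continue" entries in this part of the trace: they come from the test's meminfo helper, which walks /proc/meminfo (or a node's sysfs meminfo) one "Key: value" pair at a time, skipping every key until the one it was asked for (here HugePages_Surp) matches, then echoes that value and returns. Below is a minimal, self-contained sketch of that lookup pattern reconstructed from the trace; the name get_meminfo_sketch and its exact structure are illustrative assumptions, not the SPDK setup/common.sh source.

#!/usr/bin/env bash
# Minimal sketch (assumption: not the real SPDK helper) of the meminfo lookup
# pattern driving the "continue" trace lines: scan the file one "Key: value"
# pair at a time and stop when the requested key matches.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo

    # Per-node counters live in sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip it so the lines
    # look like the plain /proc/meminfo "Key: value" layout.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the skipped keys seen in the trace
        echo "$val"
        return 0
    done
    return 1
}

# Example: surplus hugepages system-wide and on NUMA node 0 (both 0 above).
get_meminfo_sketch HugePages_Surp
get_meminfo_sketch HugePages_Surp 0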
00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- 
setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.676 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.676 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 
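The heavily escaped strings such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p in these comparisons are not corruption: under set -x, bash re-prints a quoted pattern inside [[ ]] with every character backslash-escaped so the trace shows it is matched literally. A two-line sketch reproduces the effect; the variable names here are illustrative only.

    set -x
    get=HugePages_Surp
    if [[ MemTotal == "$get" ]]; then :; fi
    # xtrace prints something like: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]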
00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@33 -- # echo 0 00:04:03.677 19:59:01 -- setup/common.sh@33 -- # return 0 00:04:03.677 19:59:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.677 19:59:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.677 19:59:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.677 19:59:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.677 19:59:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.677 19:59:01 -- setup/common.sh@18 -- # local node=1 00:04:03.677 19:59:01 -- setup/common.sh@19 -- # local var val 00:04:03.677 19:59:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.677 19:59:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.677 19:59:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.677 19:59:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.677 19:59:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.677 19:59:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223604 kB' 'MemFree: 34647628 kB' 'MemUsed: 9575976 kB' 'SwapCached: 0 kB' 'Active: 4473180 kB' 'Inactive: 3438292 kB' 'Active(anon): 4342000 kB' 'Inactive(anon): 0 kB' 'Active(file): 131180 kB' 'Inactive(file): 3438292 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7878012 kB' 'Mapped: 504 kB' 'AnonPages: 33468 kB' 'Shmem: 4308540 kB' 'KernelStack: 7064 kB' 'PageTables: 2536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91208 kB' 'Slab: 233508 kB' 'SReclaimable: 91208 kB' 'SUnreclaim: 142300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 
19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- 
setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.677 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.677 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # continue 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.678 19:59:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.678 19:59:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.678 19:59:01 -- setup/common.sh@33 -- # echo 0 00:04:03.678 19:59:01 -- setup/common.sh@33 -- # return 0 00:04:03.678 19:59:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.678 19:59:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.678 19:59:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.678 19:59:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:03.678 node0=512 expecting 512 00:04:03.678 19:59:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.678 19:59:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.678 19:59:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.678 19:59:01 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:03.678 node1=512 expecting 512 00:04:03.678 19:59:01 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:03.678 00:04:03.678 real 0m3.527s 00:04:03.678 user 0m1.345s 00:04:03.678 sys 0m2.275s 00:04:03.678 19:59:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.678 19:59:01 -- common/autotest_common.sh@10 -- # set +x 00:04:03.678 ************************************ 00:04:03.678 END TEST per_node_1G_alloc 00:04:03.678 ************************************ 00:04:03.678 19:59:01 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:03.678 
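per_node_1G_alloc passes here: both NUMA nodes report the 512 hugepages they were expected to hold, and the harness moves on to even_2G_alloc, which requests 2 GiB worth of 2 MiB pages spread evenly over the two nodes (NRHUGE=1024 and HUGE_EVEN_ALLOC=yes in the trace below). The sizing arithmetic reduces to the following sketch; the values are taken from the trace, and treating the requested size as kB is an assumption consistent with those numbers.

    # Sketch of the sizing arithmetic behind even_2G_alloc.
    size_kb=2097152                                                           # 2 GiB of hugepages requested
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)   # 2048 on this host
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                             # 1024 pages
    nodes=2
    per_node=$(( nr_hugepages / nodes ))                                      # 512 per node, as set in nodes_test below
    echo "NRHUGE=$nr_hugepages per_node=$per_node"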
19:59:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.678 19:59:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.678 19:59:01 -- common/autotest_common.sh@10 -- # set +x 00:04:03.678 ************************************ 00:04:03.678 START TEST even_2G_alloc 00:04:03.678 ************************************ 00:04:03.678 19:59:01 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:03.678 19:59:01 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:03.678 19:59:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.678 19:59:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.678 19:59:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.678 19:59:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.678 19:59:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.678 19:59:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.678 19:59:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.678 19:59:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.678 19:59:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.678 19:59:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:03.678 19:59:01 -- setup/hugepages.sh@83 -- # : 512 00:04:03.678 19:59:01 -- setup/hugepages.sh@84 -- # : 1 00:04:03.678 19:59:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:03.678 19:59:01 -- setup/hugepages.sh@83 -- # : 0 00:04:03.678 19:59:01 -- setup/hugepages.sh@84 -- # : 0 00:04:03.678 19:59:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.678 19:59:01 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:03.678 19:59:01 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:03.678 19:59:01 -- setup/hugepages.sh@153 -- # setup output 00:04:03.678 19:59:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.678 19:59:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:04:06.975 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:06.975 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:06.975 0000:80:04.1 (8086 2021): 
Already using the vfio-pci driver 00:04:06.975 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:06.975 19:59:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:06.975 19:59:04 -- setup/hugepages.sh@89 -- # local node 00:04:06.975 19:59:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.975 19:59:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.975 19:59:04 -- setup/hugepages.sh@92 -- # local surp 00:04:06.975 19:59:04 -- setup/hugepages.sh@93 -- # local resv 00:04:06.975 19:59:04 -- setup/hugepages.sh@94 -- # local anon 00:04:06.975 19:59:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.975 19:59:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.975 19:59:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.975 19:59:04 -- setup/common.sh@18 -- # local node= 00:04:06.975 19:59:04 -- setup/common.sh@19 -- # local var val 00:04:06.975 19:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.975 19:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.975 19:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.975 19:59:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.975 19:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.975 19:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74592668 kB' 'MemAvailable: 78355316 kB' 'Buffers: 12472 kB' 'Cached: 13893088 kB' 'SwapCached: 0 kB' 'Active: 10644432 kB' 'Inactive: 3702076 kB' 'Active(anon): 10033456 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444144 kB' 'Mapped: 175116 kB' 'Shmem: 9592508 kB' 'KReclaimable: 206028 kB' 'Slab: 498624 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292596 kB' 'KernelStack: 16000 kB' 'PageTables: 7584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11310056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198696 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 
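The trace has just entered verify_nr_hugepages, which checks the system-wide picture rather than a single node: get_meminfo is called without a node argument, /sys/devices/system/node/node/meminfo does not exist, so it falls back to /proc/meminfo, and the snapshot above already shows the expected end state (HugePages_Total and HugePages_Free at 1024, Hugepagesize 2048 kB, Hugetlb 2097152 kB). The values the rest of this trace extracts one field at a time (AnonHugePages, then HugePages_Surp, then HugePages_Rsvd, all 0 here) can be read directly as in the sketch below; the compact helper is illustrative, not the traced function.

    # Sketch: the system-wide fields verify_nr_hugepages is after, straight from /proc/meminfo.
    meminfo() { awk -v k="$1:" '$1 == k {print $2; f=1} END {if (!f) print 0}' /proc/meminfo; }

    anon=$(meminfo AnonHugePages)      # 0 kB in the trace
    surp=$(meminfo HugePages_Surp)     # 0
    resv=$(meminfo HugePages_Rsvd)     # 0
    total=$(meminfo HugePages_Total)   # 1024, i.e. the full even allocation is present
    echo "anon=$anon surp=$surp resv=$resv total=$total"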
00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.975 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.975 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 
19:59:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.976 19:59:04 -- setup/common.sh@33 -- # echo 0 00:04:06.976 19:59:04 -- setup/common.sh@33 -- # 
return 0 00:04:06.976 19:59:04 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.976 19:59:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.976 19:59:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.976 19:59:04 -- setup/common.sh@18 -- # local node= 00:04:06.976 19:59:04 -- setup/common.sh@19 -- # local var val 00:04:06.976 19:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.976 19:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.976 19:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.976 19:59:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.976 19:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.976 19:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74595040 kB' 'MemAvailable: 78357688 kB' 'Buffers: 12472 kB' 'Cached: 13893092 kB' 'SwapCached: 0 kB' 'Active: 10644156 kB' 'Inactive: 3702076 kB' 'Active(anon): 10033180 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443884 kB' 'Mapped: 175104 kB' 'Shmem: 9592512 kB' 'KReclaimable: 206028 kB' 'Slab: 498660 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292632 kB' 'KernelStack: 15984 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11310068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198664 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 
-- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.976 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.976 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 
19:59:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 
19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.977 19:59:04 -- setup/common.sh@33 -- # echo 0 00:04:06.977 19:59:04 -- setup/common.sh@33 -- # return 0 00:04:06.977 19:59:04 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.977 19:59:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.977 19:59:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.977 19:59:04 -- setup/common.sh@18 -- # local node= 00:04:06.977 19:59:04 -- setup/common.sh@19 -- # local var val 00:04:06.977 19:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.977 19:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.977 19:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.977 19:59:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.977 19:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.977 19:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 92293492 kB' 'MemFree: 74594044 kB' 'MemAvailable: 78356692 kB' 'Buffers: 12472 kB' 'Cached: 13893104 kB' 'SwapCached: 0 kB' 'Active: 10644188 kB' 'Inactive: 3702076 kB' 'Active(anon): 10033212 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 443920 kB' 'Mapped: 175104 kB' 'Shmem: 9592524 kB' 'KReclaimable: 206028 kB' 'Slab: 498660 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292632 kB' 'KernelStack: 16000 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11310084 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198664 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.977 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.977 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # 
continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.978 19:59:04 -- setup/common.sh@33 -- # echo 0 00:04:06.978 19:59:04 -- setup/common.sh@33 -- # return 0 00:04:06.978 19:59:04 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.978 19:59:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.978 nr_hugepages=1024 00:04:06.978 19:59:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.978 resv_hugepages=0 00:04:06.978 19:59:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.978 surplus_hugepages=0 00:04:06.978 19:59:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.978 anon_hugepages=0 00:04:06.978 19:59:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.978 19:59:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.978 19:59:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.978 19:59:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.978 19:59:04 -- setup/common.sh@18 -- # local node= 00:04:06.978 19:59:04 -- setup/common.sh@19 -- # local var val 00:04:06.978 19:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.978 19:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.978 19:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.978 19:59:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.978 19:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.978 19:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74594044 kB' 'MemAvailable: 78356692 kB' 'Buffers: 12472 kB' 'Cached: 13893116 kB' 'SwapCached: 0 kB' 'Active: 10644856 kB' 'Inactive: 3702076 kB' 'Active(anon): 10033880 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444584 kB' 'Mapped: 175104 kB' 'Shmem: 9592536 kB' 'KReclaimable: 206028 kB' 'Slab: 498660 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292632 kB' 'KernelStack: 16032 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11321032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198664 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.978 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.978 19:59:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 
-- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.979 19:59:04 -- setup/common.sh@32 -- # continue 00:04:06.979 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 
19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.247 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.247 19:59:04 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.247 19:59:04 -- setup/common.sh@33 -- # echo 1024 00:04:07.247 19:59:04 -- setup/common.sh@33 -- # return 0 00:04:07.247 19:59:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.247 19:59:04 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.247 19:59:04 -- setup/hugepages.sh@27 -- # local node 00:04:07.247 19:59:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.247 19:59:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.247 19:59:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.247 19:59:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:07.247 19:59:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.247 19:59:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.247 19:59:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.247 19:59:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.247 19:59:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.247 19:59:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.247 19:59:04 -- setup/common.sh@18 -- # local node=0 00:04:07.247 19:59:04 -- setup/common.sh@19 -- # local var val 00:04:07.247 19:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.247 19:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.247 19:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.248 19:59:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.248 19:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.248 19:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069888 kB' 'MemFree: 39961412 kB' 'MemUsed: 8108476 kB' 'SwapCached: 0 kB' 'Active: 6170504 kB' 'Inactive: 263784 kB' 'Active(anon): 5690708 kB' 'Inactive(anon): 0 kB' 'Active(file): 479796 kB' 'Inactive(file): 263784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6027596 kB' 'Mapped: 174596 kB' 'AnonPages: 409832 kB' 'Shmem: 5284016 kB' 'KernelStack: 8872 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114820 kB' 'Slab: 265464 kB' 'SReclaimable: 114820 kB' 'SUnreclaim: 150644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 
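(Editor's note, not part of the captured console output: the trace above shows setup/common.sh's get_meminfo walking /proc/meminfo -- or a per-node /sys/devices/system/node/nodeN/meminfo file -- field by field until it reaches the requested key, then echoing just its value (e.g. 1024 for HugePages_Total, 0 for HugePages_Surp). The sketch below approximates that pattern under stated assumptions; the function name get_meminfo_sketch and its argument handling are the editor's illustration, not SPDK's actual helper.)

```bash
#!/usr/bin/env bash
# Illustrative sketch only (hypothetical name, not the SPDK implementation):
# look up one field from /proc/meminfo, or from a NUMA node's meminfo file
# when a node number is given, and print just the numeric value.
get_meminfo_sketch() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo line var val
    # Per-node counters live in sysfs and carry a "Node <N> " prefix on every line.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while read -r line; do
        line=${line#Node "$node" }        # strip the per-node prefix if present
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # value only, without the "kB" suffix
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Example mirroring the calls visible in the trace:
#   get_meminfo_sketch HugePages_Total      -> system-wide total (1024 here)
#   get_meminfo_sketch HugePages_Surp 0     -> node 0 surplus (0 here)
```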
00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.248 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.248 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@33 -- # echo 0 00:04:07.249 19:59:04 -- setup/common.sh@33 -- # return 0 00:04:07.249 19:59:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.249 19:59:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.249 19:59:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.249 19:59:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:07.249 19:59:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.249 19:59:04 -- setup/common.sh@18 -- # local node=1 00:04:07.249 19:59:04 -- setup/common.sh@19 -- # local var val 00:04:07.249 19:59:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.249 19:59:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.249 19:59:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:07.249 19:59:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:07.249 19:59:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.249 19:59:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.249 
19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223604 kB' 'MemFree: 34644548 kB' 'MemUsed: 9579056 kB' 'SwapCached: 0 kB' 'Active: 4473080 kB' 'Inactive: 3438292 kB' 'Active(anon): 4341900 kB' 'Inactive(anon): 0 kB' 'Active(file): 131180 kB' 'Inactive(file): 3438292 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7878036 kB' 'Mapped: 508 kB' 'AnonPages: 33360 kB' 'Shmem: 4308564 kB' 'KernelStack: 7080 kB' 'PageTables: 2540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91208 kB' 'Slab: 233168 kB' 'SReclaimable: 91208 kB' 'SUnreclaim: 141960 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.249 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.249 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.250 19:59:04 -- 
setup/common.sh@32 -- # continue 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # continue 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.250 19:59:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.250 19:59:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.250 19:59:04 -- setup/common.sh@33 -- # echo 0 00:04:07.250 19:59:04 -- setup/common.sh@33 -- # return 0 00:04:07.250 19:59:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.250 19:59:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.250 19:59:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.250 19:59:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.250 19:59:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:07.250 node0=512 expecting 512 00:04:07.250 19:59:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.250 19:59:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.250 19:59:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.250 19:59:04 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:07.250 node1=512 expecting 512 00:04:07.250 19:59:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:07.250 00:04:07.250 real 0m3.543s 00:04:07.250 user 0m1.340s 00:04:07.250 sys 0m2.297s 00:04:07.250 19:59:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.250 19:59:04 -- common/autotest_common.sh@10 -- # set +x 00:04:07.250 ************************************ 00:04:07.250 END TEST even_2G_alloc 00:04:07.250 ************************************ 00:04:07.250 19:59:05 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:07.250 19:59:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:07.250 19:59:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.250 19:59:05 -- common/autotest_common.sh@10 -- # set +x 00:04:07.250 ************************************ 00:04:07.250 START TEST odd_alloc 00:04:07.250 ************************************ 00:04:07.250 19:59:05 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:07.250 19:59:05 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:07.250 19:59:05 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:07.250 19:59:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:07.250 19:59:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:07.250 19:59:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:07.250 19:59:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:07.250 19:59:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:07.250 19:59:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:07.250 19:59:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:07.250 19:59:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:07.250 19:59:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:07.250 19:59:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:07.250 19:59:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:07.250 19:59:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:07.250 19:59:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.250 19:59:05 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:04:07.250 19:59:05 -- setup/hugepages.sh@83 -- # : 513 00:04:07.250 19:59:05 -- setup/hugepages.sh@84 -- # : 1 00:04:07.250 19:59:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.250 19:59:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:07.250 19:59:05 -- setup/hugepages.sh@83 -- # : 0 00:04:07.250 19:59:05 -- setup/hugepages.sh@84 -- # : 0 00:04:07.250 19:59:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:07.250 19:59:05 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:07.250 19:59:05 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:07.250 19:59:05 -- setup/hugepages.sh@160 -- # setup output 00:04:07.250 19:59:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.250 19:59:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:04:10.545 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:10.545 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:10.545 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:10.545 19:59:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:10.545 19:59:08 -- setup/hugepages.sh@89 -- # local node 00:04:10.545 19:59:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.545 19:59:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.545 19:59:08 -- setup/hugepages.sh@92 -- # local surp 00:04:10.545 19:59:08 -- setup/hugepages.sh@93 -- # local resv 00:04:10.545 19:59:08 -- setup/hugepages.sh@94 -- # local anon 00:04:10.545 19:59:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.545 19:59:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.545 19:59:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.545 19:59:08 -- setup/common.sh@18 -- # local node= 00:04:10.545 19:59:08 -- setup/common.sh@19 -- # local var val 00:04:10.545 19:59:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.545 19:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.545 19:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.545 19:59:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.545 19:59:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.545 19:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.545 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.545 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.545 19:59:08 -- setup/common.sh@16 -- 
# printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74598372 kB' 'MemAvailable: 78361020 kB' 'Buffers: 12472 kB' 'Cached: 13893196 kB' 'SwapCached: 0 kB' 'Active: 10646036 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035060 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445192 kB' 'Mapped: 175224 kB' 'Shmem: 9592616 kB' 'KReclaimable: 206028 kB' 'Slab: 498804 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292776 kB' 'KernelStack: 16000 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485748 kB' 'Committed_AS: 11310424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198664 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB'
00:04:10.545 19:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.545 19:59:08 -- setup/common.sh@32 -- # continue
00:04:10.545 19:59:08 -- setup/common.sh@31 -- # IFS=': '
00:04:10.545 19:59:08 -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue xtrace repeats for every remaining /proc/meminfo field until the requested key is reached ...]
00:04:10.546 19:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.546 19:59:08 -- setup/common.sh@33 -- # echo 0
00:04:10.546 19:59:08 -- setup/common.sh@33 -- # return 0
00:04:10.546 19:59:08 -- setup/hugepages.sh@97 -- # anon=0
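Note: the xtrace block above is the repo's get_meminfo helper (setup/common.sh) resolving AnonHugePages against a /proc/meminfo snapshot. For readers skimming the log, below is a minimal sketch of the behaviour the trace shows, reconstructed from the trace only; the name get_meminfo_sketch and the example calls are illustrative, not part of the test scripts.

    #!/usr/bin/env bash
    # Minimal sketch, assuming the behaviour visible in the xtrace above:
    # read /proc/meminfo (or a per-node meminfo file), walk it field by field
    # with IFS=': ', and print the value of the single requested key.
    shopt -s extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _ line
        # with an empty $node this path cannot exist, which is why the trace
        # above falls back to /proc/meminfo for the system-wide lookups
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the long compare/continue run in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch AnonHugePages      # 0 on this box, per the snapshot above
    get_meminfo_sketch HugePages_Total    # 1025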
00:04:10.546 19:59:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.546 19:59:08 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.546 19:59:08 -- setup/common.sh@18 -- # local node=
00:04:10.546 19:59:08 -- setup/common.sh@19 -- # local var val
00:04:10.546 19:59:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.546 19:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.546 19:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.546 19:59:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.546 19:59:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.546 19:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.546 19:59:08 -- setup/common.sh@31 -- # IFS=': '
00:04:10.546 19:59:08 -- setup/common.sh@31 -- # read -r var val _
00:04:10.546 19:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74602476 kB' 'MemAvailable: 78365124 kB' 'Buffers: 12472 kB' 'Cached: 13893200 kB' 'SwapCached: 0 kB' 'Active: 10645348 kB' 'Inactive: 3702076 kB' 'Active(anon): 10034372 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444948 kB' 'Mapped: 175104 kB' 'Shmem: 9592620 kB' 'KReclaimable: 206028 kB' 'Slab: 498800 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292772 kB' 'KernelStack: 16000 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485748 kB' 'Committed_AS: 11310436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198648 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB'
[... compare/continue xtrace over every /proc/meminfo field elided; only HugePages_Surp matches ...]
00:04:10.546 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.546 19:59:08 -- setup/common.sh@33 -- # echo 0
00:04:10.546 19:59:08 -- setup/common.sh@33 -- # return 0
00:04:10.546 19:59:08 -- setup/hugepages.sh@99 -- # surp=0
00:04:10.546 19:59:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.546 19:59:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.546 19:59:08 -- setup/common.sh@18 -- # local node=
00:04:10.546 19:59:08 -- setup/common.sh@19 -- # local var val
00:04:10.546 19:59:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.546 19:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.546 19:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.546 19:59:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.546 19:59:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.546 19:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.546 19:59:08 -- setup/common.sh@31 -- # IFS=': '
00:04:10.546 19:59:08 -- setup/common.sh@31 -- # read -r var val _
00:04:10.546 19:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74602728 kB' 'MemAvailable: 78365376 kB' 'Buffers: 12472 kB' 'Cached: 13893212 kB' 'SwapCached: 0 kB' 'Active: 10645360 kB' 'Inactive: 3702076 kB' 'Active(anon): 10034384 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444944 kB' 'Mapped: 175104 kB' 'Shmem: 9592632 kB' 'KReclaimable: 206028 kB' 'Slab: 498800 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292772 kB' 'KernelStack: 16000 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485748 kB' 'Committed_AS: 11310132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198664 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB'
[... compare/continue xtrace over every /proc/meminfo field elided; only HugePages_Rsvd matches ...]
00:04:10.547 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.547 19:59:08 -- setup/common.sh@33 -- # echo 0
00:04:10.547 19:59:08 -- setup/common.sh@33 -- # return 0
00:04:10.547 19:59:08 -- setup/hugepages.sh@100 -- # resv=0
00:04:10.547 19:59:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:10.547 nr_hugepages=1025
00:04:10.547 19:59:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.547 resv_hugepages=0
00:04:10.547 19:59:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.547 surplus_hugepages=0
00:04:10.547 19:59:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.547 anon_hugepages=0
00:04:10.547 19:59:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:10.547 19:59:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
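Note: the four echoes above summarize what setup/hugepages.sh has established so far: 1025 hugepages configured (an odd count on purpose for this test), with no reserved, surplus or anonymous huge pages, and the script only proceeds when the arithmetic check holds. A worked form of that check under the values printed in this run is sketched below; the function name check_hugepages and the variable names are illustrative only.

    # Worked form of the (( 1025 == nr_hugepages + surp + resv )) check above,
    # using the values echoed in this run (a sketch, not the test script itself).
    check_hugepages() {
        local expected=1025     # hugepage count requested for this run
        local total=1025        # HugePages_Total reported by /proc/meminfo
        local resv=0 surp=0     # HugePages_Rsvd and HugePages_Surp from the same snapshot
        (( expected == total + surp + resv )) || return 1    # 1025 == 1025 + 0 + 0
        echo "hugepage accounting consistent: ${total} x 2048 kB = $(( total * 2048 )) kB"
    }
    check_hugepages    # prints: hugepage accounting consistent: 1025 x 2048 kB = 2099200 kB

The 2099200 kB figure matches the 'Hugetlb: 2099200 kB' field in the meminfo snapshots above, i.e. the entire pool is plain 2048 kB pages.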
00:04:10.547 19:59:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.547 19:59:08 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.547 19:59:08 -- setup/common.sh@18 -- # local node=
00:04:10.547 19:59:08 -- setup/common.sh@19 -- # local var val
00:04:10.547 19:59:08 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.547 19:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.547 19:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.547 19:59:08 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.547 19:59:08 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.547 19:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.547 19:59:08 -- setup/common.sh@31 -- # IFS=': '
00:04:10.547 19:59:08 -- setup/common.sh@31 -- # read -r var val _
00:04:10.547 19:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74603224 kB' 'MemAvailable: 78365872 kB' 'Buffers: 12472 kB' 'Cached: 13893232 kB' 'SwapCached: 0 kB' 'Active: 10645784 kB' 'Inactive: 3702076 kB' 'Active(anon): 10034808 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445528 kB' 'Mapped: 175104 kB' 'Shmem: 9592652 kB' 'KReclaimable: 206028 kB' 'Slab: 498808 kB' 'SReclaimable: 206028 kB' 'SUnreclaim: 292780 kB' 'KernelStack: 16032 kB' 'PageTables: 7692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485748 kB' 'Committed_AS: 11314212 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198632 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB'
[... compare/continue xtrace over every /proc/meminfo field elided; only HugePages_Total matches ...]
00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.548 19:59:08 -- setup/common.sh@33 -- # echo 1025
00:04:10.548 19:59:08 -- setup/common.sh@33 -- # return 0
00:04:10.548 19:59:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:10.548 19:59:08 -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.548 19:59:08 -- setup/hugepages.sh@27 -- # local node
00:04:10.548 19:59:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.548 19:59:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:10.548 19:59:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.548 19:59:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:10.548 19:59:08 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:10.548 19:59:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
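Note: get_nodes above finds two NUMA nodes and records the per-node hugepage counts (512 on node0, 513 on node1), which is how the odd total of 1025 pages is split on this box. The loop that follows re-reads each node's meminfo to confirm there are no surplus pages per node. A standalone sketch of that discovery step is below; it reads the per-node sysfs counters directly and assumes 2048 kB pages (the Hugepagesize reported above), so the array name per_node and the summing are illustrative, not the test's exact mechanism.

    # Sketch of the per-node discovery shown above: one hugepage count per
    # /sys/devices/system/node/nodeN; on this box 512 + 513 = 1025.
    shopt -s nullglob
    declare -A per_node
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}                                   # node0 -> 0, node1 -> 1
        per_node[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    total=0
    for n in "${!per_node[@]}"; do
        echo "node$n: ${per_node[$n]} hugepages"
        (( total += per_node[$n] ))
    done
    echo "total: $total"    # expected to match HugePages_Total (1025 here)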
[[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.548 19:59:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.548 19:59:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.548 19:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069888 kB' 'MemFree: 39956332 kB' 'MemUsed: 8113556 kB' 'SwapCached: 0 kB' 'Active: 6172132 kB' 'Inactive: 263784 kB' 'Active(anon): 5692336 kB' 'Inactive(anon): 0 kB' 'Active(file): 479796 kB' 'Inactive(file): 263784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6027604 kB' 'Mapped: 174596 kB' 'AnonPages: 411580 kB' 'Shmem: 5284024 kB' 'KernelStack: 8920 kB' 'PageTables: 5076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114820 kB' 'Slab: 265592 kB' 'SReclaimable: 114820 kB' 'SUnreclaim: 150772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.548 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.548 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.548 19:59:08 -- setup/common.sh@33 -- # echo 0 00:04:10.548 19:59:08 -- setup/common.sh@33 -- # return 0 00:04:10.548 19:59:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.548 19:59:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.548 19:59:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.548 19:59:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:10.548 19:59:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.548 19:59:08 -- setup/common.sh@18 -- # local node=1 00:04:10.548 19:59:08 -- setup/common.sh@19 -- # local var val 00:04:10.549 19:59:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.549 19:59:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.549 19:59:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:10.549 19:59:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:10.549 19:59:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.549 19:59:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.549 19:59:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223604 kB' 'MemFree: 34648128 kB' 'MemUsed: 9575476 kB' 'SwapCached: 0 kB' 'Active: 4473940 kB' 'Inactive: 3438292 kB' 'Active(anon): 4342760 kB' 'Inactive(anon): 0 kB' 'Active(file): 131180 kB' 'Inactive(file): 3438292 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7878124 kB' 'Mapped: 516 kB' 'AnonPages: 34188 kB' 'Shmem: 4308652 kB' 'KernelStack: 7080 kB' 'PageTables: 2536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91208 kB' 'Slab: 233200 kB' 'SReclaimable: 91208 kB' 'SUnreclaim: 141992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.549 19:59:08 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.549 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.549 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- 
setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.809 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.809 19:59:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # continue 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.810 19:59:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.810 19:59:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.810 19:59:08 -- setup/common.sh@33 -- # echo 0 00:04:10.810 19:59:08 -- setup/common.sh@33 -- # return 0 00:04:10.810 19:59:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.810 19:59:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.810 19:59:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.810 19:59:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:10.810 node0=512 expecting 513 00:04:10.810 19:59:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.810 19:59:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.810 19:59:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.810 19:59:08 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:10.810 node1=513 expecting 512 00:04:10.810 19:59:08 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 
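Annotation (not part of the SPDK scripts): the node0/node1 checks traced above all reduce to one pattern -- read a single field out of /sys/devices/system/node/node<N>/meminfo with IFS=': ', the way setup/common.sh's get_meminfo does, and compare it against the expected count. A minimal standalone sketch under those assumptions (two-node box, standard sysfs layout); get_node_meminfo is an illustrative name only, not a function from setup/common.sh:

#!/usr/bin/env bash
# Illustrative sketch only -- same parsing idea as get_meminfo in setup/common.sh.
get_node_meminfo() {
    local node=$1 get=$2 var val _
    # Per-node lines look like "Node 0 HugePages_Total:   512"; strip the
    # "Node <N> " prefix so the field name becomes the first token.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "/sys/devices/system/node/node${node}/meminfo")
    return 1
}

# Usage: the odd_alloc check above verifies that the 1025 pages ended up
# split 512/513 across the two NUMA nodes.
for node in 0 1; do
    echo "node${node}: $(get_node_meminfo "$node" HugePages_Total) hugepages"
done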
00:04:10.810 00:04:10.810 real 0m3.465s 00:04:10.810 user 0m1.284s 00:04:10.810 sys 0m2.269s 00:04:10.810 19:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.810 19:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:10.810 ************************************ 00:04:10.810 END TEST odd_alloc 00:04:10.810 ************************************ 00:04:10.810 19:59:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:10.810 19:59:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.810 19:59:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.810 19:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:10.810 ************************************ 00:04:10.810 START TEST custom_alloc 00:04:10.810 ************************************ 00:04:10.810 19:59:08 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:10.810 19:59:08 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:10.810 19:59:08 -- setup/hugepages.sh@169 -- # local node 00:04:10.810 19:59:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:10.810 19:59:08 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:10.810 19:59:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:10.810 19:59:08 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:10.810 19:59:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:10.810 19:59:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:10.810 19:59:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.810 19:59:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.810 19:59:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.810 19:59:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:10.810 19:59:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.810 19:59:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.810 19:59:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.810 19:59:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:10.810 19:59:08 -- setup/hugepages.sh@83 -- # : 256 00:04:10.810 19:59:08 -- setup/hugepages.sh@84 -- # : 1 00:04:10.810 19:59:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:10.810 19:59:08 -- setup/hugepages.sh@83 -- # : 0 00:04:10.810 19:59:08 -- setup/hugepages.sh@84 -- # : 0 00:04:10.810 19:59:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:10.810 19:59:08 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:10.810 19:59:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:10.810 19:59:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:10.810 19:59:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.810 19:59:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.810 19:59:08 -- setup/hugepages.sh@62 -- # 
local user_nodes 00:04:10.810 19:59:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.810 19:59:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.810 19:59:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.810 19:59:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.810 19:59:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.810 19:59:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.810 19:59:08 -- setup/hugepages.sh@78 -- # return 0 00:04:10.810 19:59:08 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:10.810 19:59:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.810 19:59:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.810 19:59:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:10.810 19:59:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:10.810 19:59:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:10.810 19:59:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.810 19:59:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.810 19:59:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:10.810 19:59:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:10.810 19:59:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.810 19:59:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.810 19:59:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:10.810 19:59:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.810 19:59:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:10.810 19:59:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:10.810 19:59:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:10.810 19:59:08 -- setup/hugepages.sh@78 -- # return 0 00:04:10.810 19:59:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:10.810 19:59:08 -- setup/hugepages.sh@187 -- # setup output 00:04:10.810 19:59:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.810 19:59:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:04:14.104 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:14.104 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.4 (8086 
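Annotation (not part of the SPDK scripts): the custom_alloc setup traced above is only unit conversion before scripts/setup.sh is called -- each requested size in kB is divided by the 2048 kB default hugepage size, and the per-node counts are joined into HUGENODE. A small sketch of that arithmetic under those assumptions; kb_to_pages is an illustrative helper, not a function from setup/hugepages.sh:

#!/usr/bin/env bash
# Illustrative only -- mirrors the values appearing in the trace above.
default_hugepages_kb=2048                   # Hugepagesize: 2048 kB in the meminfo dumps

kb_to_pages() { echo $(( $1 / default_hugepages_kb )); }

nodes_hp[0]=$(kb_to_pages 1048576)          # 1048576 kB -> 512 hugepages on node 0
nodes_hp[1]=$(kb_to_pages 2097152)          # 2097152 kB -> 1024 hugepages on node 1

_nr_hugepages=0
HUGENODE=()
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done

( IFS=,; echo "HUGENODE=${HUGENODE[*]}  total=${_nr_hugepages}" )
# -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024  total=1536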
2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:14.104 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:14.104 19:59:11 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:14.104 19:59:11 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:14.104 19:59:11 -- setup/hugepages.sh@89 -- # local node 00:04:14.104 19:59:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:14.104 19:59:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:14.104 19:59:11 -- setup/hugepages.sh@92 -- # local surp 00:04:14.104 19:59:11 -- setup/hugepages.sh@93 -- # local resv 00:04:14.104 19:59:11 -- setup/hugepages.sh@94 -- # local anon 00:04:14.104 19:59:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:14.104 19:59:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:14.104 19:59:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:14.104 19:59:11 -- setup/common.sh@18 -- # local node= 00:04:14.104 19:59:11 -- setup/common.sh@19 -- # local var val 00:04:14.104 19:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.104 19:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.104 19:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.104 19:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.104 19:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.104 19:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 73534624 kB' 'MemAvailable: 77297216 kB' 'Buffers: 12472 kB' 'Cached: 13893320 kB' 'SwapCached: 0 kB' 'Active: 10646764 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035788 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445840 kB' 'Mapped: 175700 kB' 'Shmem: 9592740 kB' 'KReclaimable: 205916 kB' 'Slab: 499132 kB' 'SReclaimable: 205916 kB' 'SUnreclaim: 293216 kB' 'KernelStack: 15984 kB' 'PageTables: 7560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962484 kB' 'Committed_AS: 11312588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198600 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # 
continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.104 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.104 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 
19:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:14.105 19:59:11 -- setup/common.sh@33 -- # echo 0 00:04:14.105 19:59:11 -- setup/common.sh@33 -- # return 0 00:04:14.105 19:59:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:14.105 19:59:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:14.105 19:59:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.105 19:59:11 -- setup/common.sh@18 -- # local node= 00:04:14.105 19:59:11 -- setup/common.sh@19 -- # local var val 00:04:14.105 19:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.105 19:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.105 19:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.105 19:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.105 19:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.105 19:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 73526496 kB' 'MemAvailable: 77289052 kB' 'Buffers: 12472 kB' 'Cached: 13893328 kB' 'SwapCached: 0 kB' 'Active: 10649852 kB' 'Inactive: 3702076 kB' 'Active(anon): 10038876 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 449344 kB' 'Mapped: 175616 kB' 'Shmem: 9592748 kB' 'KReclaimable: 205844 kB' 'Slab: 499048 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293204 kB' 'KernelStack: 15952 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962484 kB' 'Committed_AS: 11316836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198572 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.105 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.105 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # 
continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.106 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.106 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.107 19:59:11 -- setup/common.sh@33 -- # echo 0 00:04:14.107 19:59:11 -- setup/common.sh@33 -- # return 0 00:04:14.107 19:59:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:14.107 19:59:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:14.107 19:59:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:14.107 19:59:11 -- setup/common.sh@18 -- # local node= 00:04:14.107 19:59:11 -- setup/common.sh@19 -- # local var val 00:04:14.107 19:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.107 19:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.107 19:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.107 19:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.107 19:59:11 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:14.107 19:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 73526664 kB' 'MemAvailable: 77289220 kB' 'Buffers: 12472 kB' 'Cached: 13893328 kB' 'SwapCached: 0 kB' 'Active: 10644944 kB' 'Inactive: 3702076 kB' 'Active(anon): 10033968 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444432 kB' 'Mapped: 175112 kB' 'Shmem: 9592748 kB' 'KReclaimable: 205844 kB' 'Slab: 499032 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293188 kB' 'KernelStack: 15968 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962484 kB' 'Committed_AS: 11310732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198584 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 
-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': 
' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.107 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.107 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 
-- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:14.108 19:59:11 -- setup/common.sh@33 -- # echo 0 00:04:14.108 19:59:11 -- setup/common.sh@33 -- # return 0 00:04:14.108 19:59:11 -- setup/hugepages.sh@100 -- # resv=0 00:04:14.108 19:59:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:14.108 nr_hugepages=1536 00:04:14.108 19:59:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:14.108 resv_hugepages=0 00:04:14.108 19:59:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:14.108 surplus_hugepages=0 00:04:14.108 19:59:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:14.108 anon_hugepages=0 00:04:14.108 19:59:11 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:14.108 19:59:11 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:14.108 19:59:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:14.108 19:59:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:14.108 19:59:11 -- setup/common.sh@18 -- # local node= 00:04:14.108 19:59:11 -- setup/common.sh@19 -- # local var val 00:04:14.108 19:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.108 19:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.108 19:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:14.108 19:59:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:14.108 19:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.108 19:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 73527060 kB' 'MemAvailable: 77289616 kB' 'Buffers: 12472 kB' 'Cached: 13893356 kB' 'SwapCached: 0 kB' 'Active: 10645120 kB' 'Inactive: 3702076 kB' 'Active(anon): 10034144 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 444688 kB' 'Mapped: 175112 kB' 'Shmem: 9592776 kB' 'KReclaimable: 205844 kB' 'Slab: 499036 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293192 kB' 'KernelStack: 16000 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962484 kB' 'Committed_AS: 11311116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198600 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.108 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.108 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # 
continue 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.109 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.109 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 
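The trace above is the meminfo helper walking every "key: value" pair in /proc/meminfo and discarding all but the requested field; only the matching line reaches the final echo/return. A minimal sketch of that pattern, assuming the standard /proc/meminfo layout (the name get_meminfo_field is illustrative, not the script's own):

get_meminfo_field() {
    # Scan "key: value [kB]" lines and print the value of the requested key.
    local want=$1 file=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "${val:-0}"; return 0; }
    done < "$file"
    echo 0   # field absent: report zero, matching the test's defaults
}
# e.g. get_meminfo_field HugePages_Total   -> 1536 in this run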
00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:14.110 19:59:11 -- setup/common.sh@33 -- # echo 1536 00:04:14.110 19:59:11 -- setup/common.sh@33 -- # return 0 00:04:14.110 19:59:11 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:14.110 19:59:11 -- setup/hugepages.sh@112 -- # get_nodes 00:04:14.110 19:59:11 -- setup/hugepages.sh@27 -- # local node 00:04:14.110 19:59:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.110 19:59:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:14.110 19:59:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:14.110 19:59:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:14.110 19:59:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:14.110 19:59:11 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:14.110 19:59:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.110 19:59:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.110 19:59:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:14.110 19:59:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.110 19:59:11 -- setup/common.sh@18 -- # local node=0 00:04:14.110 19:59:11 -- setup/common.sh@19 -- # local var val 00:04:14.110 19:59:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.110 19:59:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.110 19:59:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:14.110 19:59:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:14.110 19:59:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.110 19:59:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069888 kB' 'MemFree: 39936148 kB' 'MemUsed: 8133740 kB' 'SwapCached: 0 kB' 'Active: 6170528 kB' 'Inactive: 263784 kB' 'Active(anon): 5690732 kB' 'Inactive(anon): 0 kB' 'Active(file): 479796 kB' 'Inactive(file): 263784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6027640 kB' 'Mapped: 174592 kB' 'AnonPages: 409800 kB' 'Shmem: 5284060 kB' 'KernelStack: 8888 kB' 'PageTables: 4884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114636 kB' 'Slab: 265712 kB' 'SReclaimable: 114636 kB' 'SUnreclaim: 151076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- 
setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.110 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.110 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:11 -- setup/common.sh@31 -- # IFS=': ' 
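Here the same walk runs against a per-node file: with node=0 the helper switches mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped before matching. A hedged per-node equivalent (node_meminfo_field is a hypothetical name, not from setup/common.sh):

node_meminfo_field() {
    # Print one field from a NUMA node's meminfo, dropping the "Node N " prefix.
    local node=$1 want=$2 line var val _
    while read -r line; do
        line=${line#Node "$node" }
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$want" ]] && { echo "${val:-0}"; return 0; }
    done < "/sys/devices/system/node/node${node}/meminfo"
    echo 0
}
# e.g. node_meminfo_field 0 HugePages_Surp   -> 0 in the read above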
00:04:14.111 19:59:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@33 -- # echo 0 00:04:14.111 19:59:12 -- setup/common.sh@33 -- # return 0 00:04:14.111 19:59:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.111 19:59:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:14.111 19:59:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:14.111 19:59:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:14.111 19:59:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:14.111 19:59:12 -- setup/common.sh@18 -- # local node=1 00:04:14.111 19:59:12 -- setup/common.sh@19 -- # local var val 00:04:14.111 19:59:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:14.111 19:59:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:14.111 19:59:12 -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node1/meminfo ]] 00:04:14.111 19:59:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:14.111 19:59:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:14.111 19:59:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223604 kB' 'MemFree: 33591868 kB' 'MemUsed: 10631736 kB' 'SwapCached: 0 kB' 'Active: 4474604 kB' 'Inactive: 3438292 kB' 'Active(anon): 4343424 kB' 'Inactive(anon): 0 kB' 'Active(file): 131180 kB' 'Inactive(file): 3438292 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7878216 kB' 'Mapped: 520 kB' 'AnonPages: 34888 kB' 'Shmem: 4308744 kB' 'KernelStack: 7112 kB' 'PageTables: 2724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 91208 kB' 'Slab: 233324 kB' 'SReclaimable: 91208 kB' 'SUnreclaim: 142116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.111 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.111 19:59:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 
00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.112 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.112 19:59:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
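What the per-node reads feed into is the bookkeeping that closes the test: the HugePages_Total reported by the kernel (1536 here) must equal the requested count plus any surplus and reserved pages, and the per-node totals must match the 512/1024 split custom_alloc asked for. A compressed sketch of that check, reusing the hypothetical helpers sketched earlier (a mirror of the hugepages.sh logic, not the script itself):

nr=$(get_meminfo_field HugePages_Total)        # 1536 in this run
surp=$(get_meminfo_field HugePages_Surp)       # 0
resv=$(get_meminfo_field HugePages_Rsvd)       # 0
(( nr == 1536 + surp + resv )) || echo "unexpected hugepage count: $nr"
node0=$(node_meminfo_field 0 HugePages_Total)  # expecting 512
node1=$(node_meminfo_field 1 HugePages_Total)  # expecting 1024
[[ "$node0,$node1" == "512,1024" ]] && echo 'node0=512 node1=1024 as requested'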
00:04:14.372 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.372 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 19:59:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.372 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.372 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 19:59:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.372 19:59:12 -- setup/common.sh@32 -- # continue 00:04:14.372 19:59:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:14.372 19:59:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:14.372 19:59:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:14.372 19:59:12 -- setup/common.sh@33 -- # echo 0 00:04:14.372 19:59:12 -- setup/common.sh@33 -- # return 0 00:04:14.373 19:59:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:14.373 19:59:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.373 19:59:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.373 19:59:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.373 19:59:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:14.373 node0=512 expecting 512 00:04:14.373 19:59:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:14.373 19:59:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:14.373 19:59:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:14.373 19:59:12 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:14.373 node1=1024 expecting 1024 00:04:14.373 19:59:12 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:14.373 00:04:14.373 real 0m3.490s 00:04:14.373 user 0m1.360s 00:04:14.373 sys 0m2.220s 00:04:14.373 19:59:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.373 19:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:14.373 ************************************ 00:04:14.373 END TEST custom_alloc 00:04:14.373 ************************************ 00:04:14.373 19:59:12 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:14.373 19:59:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.373 19:59:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.373 19:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:14.373 ************************************ 00:04:14.373 START TEST no_shrink_alloc 00:04:14.373 ************************************ 00:04:14.373 19:59:12 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:14.373 19:59:12 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:14.373 19:59:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:14.373 19:59:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:14.373 19:59:12 -- setup/hugepages.sh@51 -- # shift 00:04:14.373 19:59:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:14.373 19:59:12 -- setup/hugepages.sh@52 -- # local node_ids 00:04:14.373 19:59:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:14.373 19:59:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:14.373 19:59:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:14.373 19:59:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:14.373 19:59:12 -- setup/hugepages.sh@62 -- # 
local user_nodes 00:04:14.373 19:59:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:14.373 19:59:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:14.373 19:59:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:14.373 19:59:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:14.373 19:59:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:14.373 19:59:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:14.373 19:59:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:14.373 19:59:12 -- setup/hugepages.sh@73 -- # return 0 00:04:14.373 19:59:12 -- setup/hugepages.sh@198 -- # setup output 00:04:14.373 19:59:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.373 19:59:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:04:17.673 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:17.673 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:17.673 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:17.673 19:59:15 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:17.673 19:59:15 -- setup/hugepages.sh@89 -- # local node 00:04:17.673 19:59:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:17.673 19:59:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:17.673 19:59:15 -- setup/hugepages.sh@92 -- # local surp 00:04:17.673 19:59:15 -- setup/hugepages.sh@93 -- # local resv 00:04:17.673 19:59:15 -- setup/hugepages.sh@94 -- # local anon 00:04:17.673 19:59:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:17.673 19:59:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:17.673 19:59:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:17.673 19:59:15 -- setup/common.sh@18 -- # local node= 00:04:17.673 19:59:15 -- setup/common.sh@19 -- # local var val 00:04:17.673 19:59:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.673 19:59:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.674 19:59:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.674 19:59:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.674 19:59:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.674 19:59:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
92293492 kB' 'MemFree: 74577808 kB' 'MemAvailable: 78340364 kB' 'Buffers: 12472 kB' 'Cached: 13893440 kB' 'SwapCached: 0 kB' 'Active: 10646496 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035520 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445956 kB' 'Mapped: 175084 kB' 'Shmem: 9592860 kB' 'KReclaimable: 205844 kB' 'Slab: 498980 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293136 kB' 'KernelStack: 16112 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11315764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198760 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Active(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 
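Just before the snapshot above, the custom_alloc case closed with both per-node counts matching ("node0=512 expecting 512", "node1=1024 expecting 1024", i.e. the 512,1024 string compare), and no_shrink_alloc began by asking get_test_nr_hugepages for a 2097152 kB pool pinned to node 0. Against the 2048 kB default page size reported in the snapshot, that request works out to exactly the 1024 pages the test expects; the arithmetic, using only figures from the log:

    size_kb=2097152        # argument passed to get_test_nr_hugepages
    hugepagesize_kb=2048   # "Hugepagesize: 2048 kB" in the snapshot
    echo $(( size_kb / hugepagesize_kb ))   # 1024 == nr_hugepages == HugePages_Total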
00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.674 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.674 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:17.675 19:59:15 -- setup/common.sh@33 -- # echo 0 00:04:17.675 19:59:15 -- setup/common.sh@33 -- # return 0 00:04:17.675 19:59:15 -- setup/hugepages.sh@97 -- # anon=0 00:04:17.675 19:59:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:17.675 19:59:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.675 19:59:15 -- setup/common.sh@18 -- # local node= 00:04:17.675 19:59:15 -- setup/common.sh@19 -- # local var val 00:04:17.675 19:59:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.675 19:59:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.675 19:59:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.675 19:59:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.675 19:59:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.675 19:59:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74577052 kB' 'MemAvailable: 78339608 kB' 'Buffers: 12472 kB' 'Cached: 13893440 kB' 'SwapCached: 0 kB' 'Active: 10646332 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035356 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446256 kB' 'Mapped: 175092 kB' 'Shmem: 9592860 kB' 'KReclaimable: 205844 kB' 'Slab: 498980 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293136 kB' 'KernelStack: 16224 kB' 'PageTables: 
8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11314384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198696 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 
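Every get_meminfo call in this stretch runs with node= left empty, so the -e test against /sys/devices/system/node/node/meminfo (note the missing node number in the path) fails and the values come from the system-wide /proc/meminfo; the mem=("${mem[@]#Node +([0-9]) }") step only matters for the per-node files, whose lines carry a "Node N " prefix. A rough sketch of that source selection, pieced together from the trace (the real setup/common.sh wiring may differ in detail):

    # Pick the meminfo source the way the trace suggests: a per-node file when a
    # node id is supplied and present, otherwise the system-wide /proc/meminfo.
    shopt -s extglob
    node=${1:-}                               # empty in the calls traced here
    mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # per-node lines carry a "Node N " prefix; strip it
    printf '%s\n' "${mem[@]:0:3}"             # first few entries, e.g. MemTotal/MemFree/MemAvailable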
19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.675 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.675 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- 
setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 
-- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.676 19:59:15 -- setup/common.sh@33 -- # echo 0 00:04:17.676 19:59:15 -- setup/common.sh@33 -- # return 0 00:04:17.676 19:59:15 -- setup/hugepages.sh@99 -- # surp=0 00:04:17.676 19:59:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:17.676 19:59:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:17.676 19:59:15 -- setup/common.sh@18 -- # local node= 00:04:17.676 19:59:15 -- setup/common.sh@19 -- # local var val 00:04:17.676 19:59:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.676 19:59:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.676 19:59:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.676 19:59:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.676 19:59:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.676 19:59:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74576872 kB' 'MemAvailable: 78339428 kB' 'Buffers: 12472 kB' 'Cached: 13893444 kB' 'SwapCached: 0 kB' 'Active: 10646692 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035716 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446160 kB' 'Mapped: 175152 kB' 'Shmem: 9592864 kB' 'KReclaimable: 205844 kB' 'Slab: 499004 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293160 kB' 'KernelStack: 16128 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11315788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198760 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.676 19:59:15 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.676 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.676 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': 
' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 
-- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.677 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.677 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:17.678 19:59:15 -- setup/common.sh@33 -- # echo 0 00:04:17.678 19:59:15 -- setup/common.sh@33 -- # return 0 00:04:17.678 19:59:15 -- setup/hugepages.sh@100 -- # resv=0 00:04:17.678 19:59:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:17.678 nr_hugepages=1024 00:04:17.678 19:59:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 
00:04:17.678 resv_hugepages=0 00:04:17.678 19:59:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:17.678 surplus_hugepages=0 00:04:17.678 19:59:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:17.678 anon_hugepages=0 00:04:17.678 19:59:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.678 19:59:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:17.678 19:59:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:17.678 19:59:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:17.678 19:59:15 -- setup/common.sh@18 -- # local node= 00:04:17.678 19:59:15 -- setup/common.sh@19 -- # local var val 00:04:17.678 19:59:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.678 19:59:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.678 19:59:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:17.678 19:59:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:17.678 19:59:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.678 19:59:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74576048 kB' 'MemAvailable: 78338604 kB' 'Buffers: 12472 kB' 'Cached: 13893472 kB' 'SwapCached: 0 kB' 'Active: 10646300 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035324 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445684 kB' 'Mapped: 175128 kB' 'Shmem: 9592892 kB' 'KReclaimable: 205844 kB' 'Slab: 499004 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293160 kB' 'KernelStack: 16144 kB' 'PageTables: 7720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11315804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198744 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 
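By this point verify_nr_hugepages has collected anon_hugepages=0, surplus_hugepages=0 and resv_hugepages=0, echoed nr_hugepages=1024, and run the two consistency checks seen in the trace, (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )), before re-reading HugePages_Total. The same bookkeeping written out as a small standalone check (the awk helper below is a stand-in for get_meminfo; variable names follow the log):

    # Mirror of the setup/hugepages.sh@107-109 checks: the configured page count
    # must account for every surplus and reserved hugepage currently allocated.
    meminfo() { awk -v k="$1" -F': +' '$1 == k {print $2 + 0}' /proc/meminfo; }

    nr_hugepages=1024                      # value echoed by the test
    surp=$(meminfo HugePages_Surp)         # 0 in the log
    resv=$(meminfo HugePages_Rsvd)         # 0
    anon=$(meminfo AnonHugePages)          # 0 kB
    total=$(meminfo HugePages_Total)       # 1024

    echo "anon_hugepages=$anon surplus_hugepages=$surp resv_hugepages=$resv"
    (( total == nr_hugepages + surp + resv && total == nr_hugepages )) \
        && echo 'hugepage accounting OK' || echo 'hugepage accounting mismatch'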
19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.678 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.678 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 
19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.679 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.679 19:59:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:17.680 19:59:15 -- setup/common.sh@33 -- # echo 1024 00:04:17.680 19:59:15 -- setup/common.sh@33 -- # return 0 00:04:17.680 19:59:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:17.680 19:59:15 -- setup/hugepages.sh@112 -- # get_nodes 00:04:17.680 19:59:15 -- setup/hugepages.sh@27 -- # local node 00:04:17.680 19:59:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.680 19:59:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:17.680 19:59:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:17.680 19:59:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:17.680 19:59:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:17.680 19:59:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:17.680 19:59:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:17.680 19:59:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:17.680 19:59:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:17.680 19:59:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:17.680 19:59:15 -- setup/common.sh@18 -- # local node=0 00:04:17.680 19:59:15 -- setup/common.sh@19 -- # local var val 00:04:17.680 19:59:15 -- setup/common.sh@20 -- # local mem_f mem 00:04:17.680 19:59:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:17.680 19:59:15 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node0/meminfo ]] 00:04:17.680 19:59:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:17.680 19:59:15 -- setup/common.sh@28 -- # mapfile -t mem 00:04:17.680 19:59:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:17.680 19:59:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069888 kB' 'MemFree: 38897504 kB' 'MemUsed: 9172384 kB' 'SwapCached: 0 kB' 'Active: 6171120 kB' 'Inactive: 263784 kB' 'Active(anon): 5691324 kB' 'Inactive(anon): 0 kB' 'Active(file): 479796 kB' 'Inactive(file): 263784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6027672 kB' 'Mapped: 174600 kB' 'AnonPages: 410360 kB' 'Shmem: 5284092 kB' 'KernelStack: 8872 kB' 'PageTables: 4788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114636 kB' 'Slab: 265960 kB' 'SReclaimable: 114636 kB' 'SUnreclaim: 151324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 
00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.680 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.680 19:59:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
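The loop traced above is setup/common.sh's get_meminfo walking node0's meminfo one field at a time: the file is read into an array, the "Node <n>" prefix is stripped with an extglob pattern, and each entry is split with IFS=': ' read -r var val _ until the requested key (HugePages_Surp here, HugePages_Total in the earlier pass) matches and its value is echoed. A condensed sketch of the same technique follows; the function name, argument handling, and comments are illustrative rather than the exact SPDK helper.

  #!/usr/bin/env bash
  # Sketch of the meminfo lookup traced above. Assumes the standard layouts of
  # /proc/meminfo and /sys/devices/system/node/nodeN/meminfo.
  shopt -s extglob    # needed for the "Node +([0-9]) " prefix strip below

  get_meminfo_field() {
      local key=$1 node=${2:-}
      local mem_f=/proc/meminfo
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local line var val _
      while read -r line; do
          line=${line#Node +([0-9]) }            # per-node files prefix every row with "Node <n>"
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$key" ]]; then
              echo "$val"                        # numeric value only; any trailing "kB" lands in _
              return 0
          fi
      done < "$mem_f"
      return 1
  }

  get_meminfo_field HugePages_Total      # 1024 in this run
  get_meminfo_field HugePages_Surp 0     # 0 on node0 in this run

The real helper keeps the whole file in an array via mapfile and strips the prefix with the same extglob pattern, which is why the trace shows mem=("${mem[@]#Node +([0-9]) }") before the per-field loop.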
00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # continue 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # IFS=': ' 00:04:17.681 19:59:15 -- setup/common.sh@31 -- # read -r var val _ 00:04:17.681 19:59:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:17.681 19:59:15 -- setup/common.sh@33 -- # echo 0 00:04:17.681 19:59:15 -- setup/common.sh@33 -- # return 0 00:04:17.681 19:59:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:17.681 19:59:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:17.681 19:59:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:17.681 19:59:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:17.681 19:59:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:17.681 node0=1024 expecting 1024 00:04:17.681 19:59:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:17.681 19:59:15 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:17.681 19:59:15 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:17.681 19:59:15 -- setup/hugepages.sh@202 -- # setup output 00:04:17.681 19:59:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.681 19:59:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:04:20.972 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:20.972 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:20.972 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:20.972 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:20.972 19:59:18 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:20.972 19:59:18 -- setup/hugepages.sh@89 -- # local node 00:04:20.972 19:59:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:20.972 19:59:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:20.972 19:59:18 -- setup/hugepages.sh@92 -- # local surp 00:04:20.972 19:59:18 
-- setup/hugepages.sh@93 -- # local resv 00:04:20.972 19:59:18 -- setup/hugepages.sh@94 -- # local anon 00:04:20.972 19:59:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:20.972 19:59:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:20.972 19:59:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:20.972 19:59:18 -- setup/common.sh@18 -- # local node= 00:04:20.972 19:59:18 -- setup/common.sh@19 -- # local var val 00:04:20.972 19:59:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.972 19:59:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.972 19:59:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.972 19:59:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.972 19:59:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.972 19:59:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74585736 kB' 'MemAvailable: 78348292 kB' 'Buffers: 12472 kB' 'Cached: 13893540 kB' 'SwapCached: 0 kB' 'Active: 10646672 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035696 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445972 kB' 'Mapped: 175128 kB' 'Shmem: 9592960 kB' 'KReclaimable: 205844 kB' 'Slab: 498996 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293152 kB' 'KernelStack: 16048 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11311952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198616 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.972 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.972 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 
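A few entries back (setup/hugepages.sh@96) the trace tests the string `always [madvise] never` against `*\[\n\e\v\e\r\]*` before fetching AnonHugePages. That bracketed string is the usual format of /sys/kernel/mm/transparent_hugepage/enabled, so the check appears to count anonymous huge pages only when THP is not pinned to [never]. A minimal standalone sketch of that decision; the sysfs path is assumed here rather than taken from the script:

  # Count AnonHugePages toward the anon total only when transparent huge pages
  # are not globally disabled. The sysfs path is the conventional location and
  # is an assumption; hugepages.sh tests an equivalent value in the trace above.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon=$anon"    # anon=0 in this run, matching the anon=0 assignment that follows at hugepages.sh@97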
00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:20.973 19:59:18 -- setup/common.sh@33 -- # echo 0 00:04:20.973 19:59:18 -- setup/common.sh@33 -- # return 0 00:04:20.973 19:59:18 -- setup/hugepages.sh@97 -- # anon=0 00:04:20.973 19:59:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:20.973 19:59:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:20.973 19:59:18 -- setup/common.sh@18 -- # local node= 00:04:20.973 19:59:18 -- setup/common.sh@19 -- # local var val 00:04:20.973 19:59:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.973 19:59:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
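The verify pass that began after the setup.sh rerun gathers the same quantities as the one that finished at 19:59:15: anon, surplus, and reserved huge pages from meminfo, the global HugePages_Total, and the per-node totals from /sys/devices/system/node ("node0=1024 expecting 1024" above). A hedged, self-contained sketch of that accounting; `want` stands in for the script's nr_hugepages target and the variable names are illustrative:

  # Accounting sketch for the verify_nr_hugepages pass traced here.
  want=1024                                             # the script's nr_hugepages target in this run
  total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/ {print $NF}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd:/ {print $NF}' /proc/meminfo)

  # Global check, as at hugepages.sh@107/@110: the allocated total has to cover
  # the requested pages plus any surplus and reserved pages.
  (( total == want + surp + resv )) && echo "global count ok: $total"

  # Per-node spread, as at hugepages.sh@115-@128 ("node0=1024 expecting 1024").
  # HugePages_* rows carry no kB unit, so the last field is the page count.
  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      have=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
      echo "node${n}=${have}"
  done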
00:04:20.973 19:59:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.973 19:59:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.973 19:59:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.973 19:59:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74588052 kB' 'MemAvailable: 78350608 kB' 'Buffers: 12472 kB' 'Cached: 13893544 kB' 'SwapCached: 0 kB' 'Active: 10647024 kB' 'Inactive: 3702076 kB' 'Active(anon): 10036048 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446452 kB' 'Mapped: 175124 kB' 'Shmem: 9592964 kB' 'KReclaimable: 205844 kB' 'Slab: 499044 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293200 kB' 'KernelStack: 16064 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11311964 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198600 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.973 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.973 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 
-- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- 
setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 
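For reference, the second `setup output` pass above (hugepages.sh@202) drove scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512; setup.sh answered `INFO: Requested 512 hugepages but 1024 already allocated on node0` and reported the listed PCI devices as already using the vfio-pci driver. A short usage sketch of the same invocation, under the assumption that NRHUGE and CLEAR_HUGE are read from the environment (which the "Requested 512" message suggests); the workspace path is the one used by this job:

  # Re-run SPDK's setup script the way the trace does: keep the huge pages that
  # are already allocated (CLEAR_HUGE=no) and request 512 (NRHUGE=512). With
  # 1024 pages already on node0 it only prints the INFO line and re-checks the
  # existing vfio-pci bindings. Needs root.
  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh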
00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.974 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.974 19:59:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:20.975 19:59:18 -- setup/common.sh@33 -- # echo 0 00:04:20.975 19:59:18 -- setup/common.sh@33 -- # return 0 00:04:20.975 19:59:18 -- setup/hugepages.sh@99 -- # surp=0 00:04:20.975 19:59:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:20.975 19:59:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:20.975 19:59:18 -- setup/common.sh@18 -- # local node= 00:04:20.975 19:59:18 -- setup/common.sh@19 -- # local var val 00:04:20.975 19:59:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.975 19:59:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.975 19:59:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.975 19:59:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.975 19:59:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.975 19:59:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74589472 kB' 'MemAvailable: 78352028 kB' 'Buffers: 12472 kB' 'Cached: 13893556 kB' 'SwapCached: 0 kB' 'Active: 10646652 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035676 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 446036 kB' 'Mapped: 
175124 kB' 'Shmem: 9592976 kB' 'KReclaimable: 205844 kB' 'Slab: 499044 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293200 kB' 'KernelStack: 16000 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11311976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198600 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # continue 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.975 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.975 19:59:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 
19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.237 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.237 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # 
continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 
-- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.238 19:59:18 -- setup/common.sh@33 -- # echo 0 00:04:21.238 19:59:18 -- setup/common.sh@33 -- # return 0 00:04:21.238 19:59:18 -- setup/hugepages.sh@100 -- # resv=0 00:04:21.238 19:59:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.238 nr_hugepages=1024 00:04:21.238 19:59:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.238 resv_hugepages=0 00:04:21.238 19:59:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.238 surplus_hugepages=0 00:04:21.238 19:59:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.238 anon_hugepages=0 00:04:21.238 19:59:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.238 19:59:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.238 19:59:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.238 19:59:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.238 19:59:18 -- setup/common.sh@18 -- # local node= 00:04:21.238 19:59:18 -- setup/common.sh@19 -- # local var val 00:04:21.238 19:59:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.238 19:59:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.238 19:59:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.238 19:59:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.238 19:59:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.238 19:59:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.238 19:59:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293492 kB' 'MemFree: 74592900 kB' 'MemAvailable: 78355456 kB' 'Buffers: 12472 kB' 'Cached: 13893560 kB' 'SwapCached: 0 kB' 'Active: 10646380 kB' 'Inactive: 3702076 kB' 'Active(anon): 10035404 kB' 'Inactive(anon): 0 kB' 'Active(file): 610976 kB' 'Inactive(file): 3702076 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 445756 kB' 'Mapped: 175124 kB' 'Shmem: 9592980 kB' 'KReclaimable: 205844 kB' 'Slab: 499044 kB' 'SReclaimable: 205844 kB' 'SUnreclaim: 293200 kB' 'KernelStack: 16000 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486772 kB' 'Committed_AS: 11311992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198600 kB' 'VmallocChunk: 0 kB' 'Percpu: 51520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 
0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 470440 kB' 'DirectMap2M: 9691136 kB' 'DirectMap1G: 91226112 kB' 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.238 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.238 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 
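Once surp, resv, and the configured nr_hugepages are known, the test (hugepages.sh@107-110 in the trace) re-reads HugePages_Total and checks that the kernel's numbers add up before continuing. The exact assertion lives in test/setup/hugepages.sh; a rough, self-contained version of the idea, using only documented /proc semantics and an illustrative meminfo_field helper, would be:

# Rough sanity check in the spirit of the trace (not the exact hugepages.sh code):
# the pool reported by the kernel is the persistent pool plus any surplus pages,
# and before the test maps anything every page should still be free.
meminfo_field() { awk -v f="$1" -F': +' '$1 == f { print $2 + 0; exit }' /proc/meminfo; }

nr_requested=$(< /proc/sys/vm/nr_hugepages)   # persistent pool for the default page size
total=$(meminfo_field HugePages_Total)
free=$(meminfo_field HugePages_Free)
surp=$(meminfo_field HugePages_Surp)

(( total == nr_requested + surp )) || echo "hugepage pool does not add up" >&2
(( free == total )) || echo "some huge pages are already in use" >&2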
19:59:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 
-- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ VmallocChunk 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.239 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.239 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.240 19:59:18 -- setup/common.sh@33 -- # echo 1024 00:04:21.240 19:59:18 -- setup/common.sh@33 -- # return 0 00:04:21.240 19:59:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.240 19:59:18 -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.240 19:59:18 -- setup/hugepages.sh@27 -- # local node 00:04:21.240 19:59:18 -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.240 19:59:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.240 19:59:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.240 19:59:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.240 19:59:18 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.240 19:59:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.240 19:59:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.240 19:59:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.240 19:59:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.240 19:59:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.240 19:59:18 -- setup/common.sh@18 -- # local node=0 00:04:21.240 19:59:18 -- setup/common.sh@19 -- # local var val 00:04:21.240 19:59:18 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.240 19:59:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.240 19:59:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.240 19:59:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.240 19:59:18 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.240 19:59:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069888 kB' 'MemFree: 38915492 kB' 'MemUsed: 9154396 kB' 'SwapCached: 0 kB' 'Active: 6171904 kB' 'Inactive: 263784 kB' 'Active(anon): 5692108 kB' 'Inactive(anon): 0 kB' 'Active(file): 479796 kB' 'Inactive(file): 263784 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6027696 kB' 'Mapped: 174596 kB' 'AnonPages: 411236 kB' 'Shmem: 5284116 kB' 'KernelStack: 8920 kB' 'PageTables: 5040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 114636 kB' 'Slab: 265864 kB' 'SReclaimable: 114636 kB' 'SUnreclaim: 151228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.240 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.240 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- 
setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # continue 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.241 19:59:18 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.241 19:59:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.241 19:59:18 -- setup/common.sh@33 -- # echo 0 00:04:21.241 19:59:18 -- setup/common.sh@33 -- # return 0 00:04:21.241 19:59:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.241 19:59:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.241 19:59:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.241 19:59:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.241 19:59:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.241 node0=1024 expecting 1024 00:04:21.241 19:59:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.241 00:04:21.241 real 0m6.903s 00:04:21.241 user 0m2.558s 00:04:21.241 sys 0m4.525s 00:04:21.241 19:59:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.241 19:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:21.241 ************************************ 00:04:21.241 END TEST no_shrink_alloc 00:04:21.241 ************************************ 00:04:21.241 19:59:19 -- setup/hugepages.sh@217 -- # clear_hp 00:04:21.241 19:59:19 -- setup/hugepages.sh@37 -- # local node hp 00:04:21.241 19:59:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.241 19:59:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.241 19:59:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.241 19:59:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.241 19:59:19 -- setup/hugepages.sh@41 -- # echo 0 
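The "node0=1024 expecting 1024" line above is the per-NUMA-node half of the same accounting: on this run all 1024 default-size pages were placed on node 0, so node 0's meminfo must report them and node 1 must report none. The echo 0 entries that follow are clear_hp tearing the pools back down by writing 0 into each node's per-size nr_hugepages files. A hedged, standalone equivalent of that teardown (standard sysfs paths, run as root):

#!/usr/bin/env bash
# Reset every per-node huge page pool to zero pages, the way the clear_hp loop
# in the trace does. Each node exposes one directory per supported page size
# (hugepages-2048kB and hugepages-1048576kB on this machine).
shopt -s nullglob
for node_dir in /sys/devices/system/node/node[0-9]*; do
    for pool in "$node_dir"/hugepages/hugepages-*; do
        echo 0 > "$pool/nr_hugepages"
    done
done
export CLEAR_HUGE=yes   # the test exports this so later stages know the pools were cleared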
00:04:21.241 19:59:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.241 19:59:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.241 19:59:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.241 19:59:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.241 19:59:19 -- setup/hugepages.sh@41 -- # echo 0 00:04:21.241 19:59:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.241 19:59:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.241 00:04:21.241 real 0m28.096s 00:04:21.241 user 0m9.440s 00:04:21.241 sys 0m16.218s 00:04:21.241 19:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.241 19:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.241 ************************************ 00:04:21.241 END TEST hugepages 00:04:21.241 ************************************ 00:04:21.241 19:59:19 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/driver.sh 00:04:21.241 19:59:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:21.241 19:59:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:21.241 19:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:21.241 ************************************ 00:04:21.241 START TEST driver 00:04:21.241 ************************************ 00:04:21.241 19:59:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/driver.sh 00:04:21.500 * Looking for test storage... 00:04:21.500 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup 00:04:21.500 19:59:19 -- setup/driver.sh@68 -- # setup reset 00:04:21.501 19:59:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.501 19:59:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.801 19:59:23 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:26.801 19:59:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.801 19:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.801 19:59:23 -- common/autotest_common.sh@10 -- # set +x 00:04:26.801 ************************************ 00:04:26.801 START TEST guess_driver 00:04:26.801 ************************************ 00:04:26.801 19:59:23 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:26.801 19:59:23 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:26.801 19:59:23 -- setup/driver.sh@47 -- # local fail=0 00:04:26.801 19:59:23 -- setup/driver.sh@49 -- # pick_driver 00:04:26.801 19:59:23 -- setup/driver.sh@36 -- # vfio 00:04:26.801 19:59:23 -- setup/driver.sh@21 -- # local iommu_grups 00:04:26.801 19:59:23 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:26.801 19:59:23 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:26.801 19:59:23 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:26.801 19:59:23 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:26.801 19:59:23 -- setup/driver.sh@29 -- # (( 162 > 0 )) 00:04:26.801 19:59:23 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:26.801 19:59:23 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:26.801 19:59:23 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:26.801 19:59:23 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:26.801 19:59:23 -- setup/driver.sh@12 -- # [[ insmod 
/lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:26.801 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.801 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.801 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:26.801 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:26.801 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:26.801 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:26.801 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:26.801 19:59:23 -- setup/driver.sh@30 -- # return 0 00:04:26.801 19:59:23 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:26.801 19:59:23 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:26.801 19:59:23 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:26.801 19:59:23 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:26.801 Looking for driver=vfio-pci 00:04:26.801 19:59:23 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:26.801 19:59:23 -- setup/driver.sh@45 -- # setup output config 00:04:26.801 19:59:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.801 19:59:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:04:29.396 19:59:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:26 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:26 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:26 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 
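At this point the driver test has settled on vfio-pci: the box has no unsafe no-IOMMU override (unsafe_vfio=N), 162 IOMMU groups exist, and modprobe --show-depends vfio_pci resolves to real .ko modules, so "setup output config" is run and each marker line it prints is read back to confirm the devices really got bound to vfio-pci. A simplified, self-contained sketch of that selection logic follows; pick_driver is an illustrative name, the uio_pci_generic fallback is an assumption of this sketch, and the real decision tree lives in test/setup/driver.sh.

#!/usr/bin/env bash
# Simplified driver-selection sketch based on the checks visible in the trace.
shopt -s nullglob
pick_driver() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if [[ $unsafe_vfio == [Yy] ]] || (( ${#iommu_groups[@]} > 0 )); then
        # vfio-pci is only usable if modprobe can resolve it to actual modules.
        if modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    # Fallback when no IOMMU support is available (assumption for this sketch).
    if modprobe --show-depends uio_pci_generic 2> /dev/null | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo "No valid driver found"
    return 1
}
driver=$(pick_driver)
echo "Looking for driver=$driver"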
19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.396 19:59:27 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:29.396 19:59:27 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:29.396 19:59:27 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.686 19:59:30 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.686 19:59:30 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.686 19:59:30 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.686 19:59:30 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:32.686 19:59:30 -- setup/driver.sh@65 -- # setup reset 00:04:32.686 19:59:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:32.686 19:59:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.963 00:04:37.963 real 0m11.361s 00:04:37.963 user 0m2.516s 00:04:37.963 sys 0m5.008s 00:04:37.963 19:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.963 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.963 ************************************ 00:04:37.963 END TEST guess_driver 00:04:37.963 ************************************ 00:04:37.963 00:04:37.963 real 0m15.987s 00:04:37.963 user 0m3.690s 00:04:37.963 sys 0m7.641s 00:04:37.963 19:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.963 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.963 ************************************ 00:04:37.963 END TEST driver 00:04:37.963 ************************************ 00:04:37.963 19:59:35 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/devices.sh 00:04:37.963 19:59:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.963 19:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.963 19:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:37.963 ************************************ 00:04:37.963 START TEST devices 00:04:37.963 ************************************ 00:04:37.963 19:59:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/devices.sh 00:04:37.963 * Looking for test storage... 
00:04:37.963 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup 00:04:37.963 19:59:35 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:37.963 19:59:35 -- setup/devices.sh@192 -- # setup reset 00:04:37.963 19:59:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.963 19:59:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.260 19:59:38 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:41.260 19:59:38 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:41.260 19:59:38 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:41.260 19:59:38 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:41.260 19:59:38 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:41.260 19:59:38 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:41.260 19:59:38 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:41.260 19:59:38 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.260 19:59:38 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:41.260 19:59:38 -- setup/devices.sh@196 -- # blocks=() 00:04:41.260 19:59:38 -- setup/devices.sh@196 -- # declare -a blocks 00:04:41.260 19:59:38 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:41.260 19:59:38 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:41.260 19:59:38 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:41.260 19:59:38 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:41.260 19:59:38 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:41.260 19:59:38 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:41.260 19:59:38 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:41.260 19:59:38 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:41.260 19:59:38 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:41.260 19:59:38 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:41.260 19:59:38 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:41.260 No valid GPT data, bailing 00:04:41.260 19:59:38 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.260 19:59:38 -- scripts/common.sh@393 -- # pt= 00:04:41.260 19:59:38 -- scripts/common.sh@394 -- # return 1 00:04:41.260 19:59:38 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:41.260 19:59:38 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:41.260 19:59:38 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:41.260 19:59:38 -- setup/common.sh@80 -- # echo 4000787030016 00:04:41.260 19:59:38 -- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size )) 00:04:41.260 19:59:38 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:41.260 19:59:38 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:41.260 19:59:38 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:41.260 19:59:38 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:41.260 19:59:38 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:41.260 19:59:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.260 19:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.260 19:59:38 -- common/autotest_common.sh@10 -- # set +x 00:04:41.260 ************************************ 00:04:41.260 START TEST nvme_mount 00:04:41.260 ************************************ 00:04:41.260 19:59:38 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:04:41.260 19:59:38 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:41.260 19:59:38 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:41.260 19:59:38 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.260 19:59:38 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.260 19:59:38 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:41.260 19:59:38 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.260 19:59:38 -- setup/common.sh@40 -- # local part_no=1 00:04:41.260 19:59:38 -- setup/common.sh@41 -- # local size=1073741824 00:04:41.260 19:59:38 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.260 19:59:38 -- setup/common.sh@44 -- # parts=() 00:04:41.260 19:59:38 -- setup/common.sh@44 -- # local parts 00:04:41.260 19:59:38 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.260 19:59:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.260 19:59:38 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.260 19:59:38 -- setup/common.sh@46 -- # (( part++ )) 00:04:41.260 19:59:38 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.260 19:59:38 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.260 19:59:38 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:41.260 19:59:38 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:42.196 Creating new GPT entries in memory. 00:04:42.196 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:42.196 other utilities. 00:04:42.196 19:59:39 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:42.196 19:59:39 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.196 19:59:39 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:42.196 19:59:39 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:42.196 19:59:39 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.133 Creating new GPT entries in memory. 00:04:43.133 The operation has completed successfully. 
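The two sgdisk messages above correspond to the partition_drive step: the 1 GiB size is converted to 512-byte sectors (1073741824 / 512 = 2097152), the old GPT is zapped, and a single partition is created from sector 2048 to 2048 + 2097152 - 1 = 2099199 while holding a flock on the disk; sync_dev_uevents.sh waits for the kernel to announce /dev/nvme0n1p1 before the test moves on. A standalone sketch of the same sequence (destructive; partprobe stands in for the uevent wait and is an assumption of this sketch):

#!/usr/bin/env bash
set -euo pipefail
# Destroys all data on $disk. Device and size match this test run; adjust as needed.
disk=/dev/nvme0n1
size_bytes=$((1024 * 1024 * 1024))            # 1 GiB test partition
size_sectors=$((size_bytes / 512))            # 2097152 sectors
part_start=2048
part_end=$((part_start + size_sectors - 1))   # 2099199

sgdisk "$disk" --zap-all                                          # wipe any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:"$part_start":"$part_end"    # create partition 1 under a lock
partprobe "$disk"                                                 # crude stand-in for waiting on the partition uevent
ls /dev/nvme0n1p1                                                 # the node the rest of the test uses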
00:04:43.133 19:59:40 -- setup/common.sh@57 -- # (( part++ )) 00:04:43.133 19:59:40 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.133 19:59:40 -- setup/common.sh@62 -- # wait 2027690 00:04:43.133 19:59:40 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.133 19:59:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:43.133 19:59:40 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.133 19:59:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:43.133 19:59:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:43.133 19:59:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.133 19:59:41 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.133 19:59:41 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:43.133 19:59:41 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:43.133 19:59:41 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.133 19:59:41 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.133 19:59:41 -- setup/devices.sh@53 -- # local found=0 00:04:43.133 19:59:41 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.133 19:59:41 -- setup/devices.sh@56 -- # : 00:04:43.133 19:59:41 -- setup/devices.sh@59 -- # local pci status 00:04:43.133 19:59:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.133 19:59:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:43.133 19:59:41 -- setup/devices.sh@47 -- # setup output config 00:04:43.133 19:59:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.133 19:59:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:46.421 19:59:43 -- setup/devices.sh@63 -- # found=1 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.421 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.421 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:43 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:43 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.422 19:59:44 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.422 19:59:44 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:46.422 19:59:44 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.422 19:59:44 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.422 19:59:44 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.422 19:59:44 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:46.422 19:59:44 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.422 19:59:44 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.422 19:59:44 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.422 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.422 19:59:44 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.422 19:59:44 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.681 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.681 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.681 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.681 /dev/nvme0n1: calling ioctl to re-read partition table: Success 
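The cleanup_nvme path above roughly unmounts the test mount point and scrubs the signatures that wipefs reported, so the next sub-test starts from a blank device. Reduced to its essentials (paths and device names as in the log):

    mnt=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"                   # drop the ext4 mount if present
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # erase the ext4 superblock magic (53 ef)
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # erase both GPT headers and the protective MBR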
00:04:46.681 19:59:44 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:46.681 19:59:44 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:46.681 19:59:44 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.681 19:59:44 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:46.681 19:59:44 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:46.681 19:59:44 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.681 19:59:44 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.681 19:59:44 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:46.681 19:59:44 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:46.681 19:59:44 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.681 19:59:44 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.681 19:59:44 -- setup/devices.sh@53 -- # local found=0 00:04:46.681 19:59:44 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.681 19:59:44 -- setup/devices.sh@56 -- # : 00:04:46.681 19:59:44 -- setup/devices.sh@59 -- # local pci status 00:04:46.681 19:59:44 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.681 19:59:44 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:46.681 19:59:44 -- setup/devices.sh@47 -- # setup output config 00:04:46.681 19:59:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.681 19:59:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:49.971 19:59:47 -- setup/devices.sh@63 -- # found=1 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.971 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.971 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.972 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.972 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.972 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.972 19:59:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.972 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.972 19:59:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.972 19:59:47 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:49.972 19:59:47 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.972 19:59:47 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.972 19:59:47 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.972 19:59:47 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.972 19:59:47 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:49.972 19:59:47 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:49.972 19:59:47 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.972 19:59:47 -- setup/devices.sh@50 -- # local mount_point= 00:04:49.972 19:59:47 -- setup/devices.sh@51 -- # local test_file= 00:04:49.972 19:59:47 -- setup/devices.sh@53 -- # local found=0 00:04:49.972 19:59:47 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.972 19:59:47 -- setup/devices.sh@59 -- # local pci status 00:04:49.972 19:59:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.972 19:59:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:49.972 19:59:47 -- setup/devices.sh@47 -- # setup output config 00:04:49.972 19:59:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.972 19:59:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ Active 
devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:53.260 19:59:50 -- setup/devices.sh@63 -- # found=1 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:53.260 19:59:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.260 19:59:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.260 19:59:51 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.260 19:59:51 -- setup/devices.sh@68 -- # return 0 00:04:53.260 19:59:51 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:53.261 19:59:51 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.261 19:59:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.261 19:59:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.261 19:59:51 -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:04:53.261 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.261 00:04:53.261 real 0m12.161s 00:04:53.261 user 0m3.539s 00:04:53.261 sys 0m6.521s 00:04:53.261 19:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.261 19:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:53.261 ************************************ 00:04:53.261 END TEST nvme_mount 00:04:53.261 ************************************ 00:04:53.261 19:59:51 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:53.261 19:59:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.261 19:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.261 19:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:53.261 ************************************ 00:04:53.261 START TEST dm_mount 00:04:53.261 ************************************ 00:04:53.261 19:59:51 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:53.261 19:59:51 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:53.261 19:59:51 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:53.261 19:59:51 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:53.261 19:59:51 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:53.261 19:59:51 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:53.261 19:59:51 -- setup/common.sh@40 -- # local part_no=2 00:04:53.261 19:59:51 -- setup/common.sh@41 -- # local size=1073741824 00:04:53.261 19:59:51 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:53.261 19:59:51 -- setup/common.sh@44 -- # parts=() 00:04:53.261 19:59:51 -- setup/common.sh@44 -- # local parts 00:04:53.261 19:59:51 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:53.261 19:59:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.261 19:59:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.261 19:59:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:53.261 19:59:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.261 19:59:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:53.261 19:59:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:53.261 19:59:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:53.261 19:59:51 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:53.261 19:59:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:53.261 19:59:51 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:54.198 Creating new GPT entries in memory. 00:04:54.198 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:54.198 other utilities. 00:04:54.198 19:59:52 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:54.198 19:59:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.198 19:59:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:54.198 19:59:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.198 19:59:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:55.575 Creating new GPT entries in memory. 00:04:55.575 The operation has completed successfully. 00:04:55.575 19:59:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:55.575 19:59:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.575 19:59:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:55.575 19:59:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.575 19:59:53 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:56.589 The operation has completed successfully. 00:04:56.589 19:59:54 -- setup/common.sh@57 -- # (( part++ )) 00:04:56.589 19:59:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.589 19:59:54 -- setup/common.sh@62 -- # wait 2031522 00:04:56.589 19:59:54 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:56.589 19:59:54 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:04:56.589 19:59:54 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:56.589 19:59:54 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:56.589 19:59:54 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:56.589 19:59:54 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:56.589 19:59:54 -- setup/devices.sh@161 -- # break 00:04:56.589 19:59:54 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:56.590 19:59:54 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:56.590 19:59:54 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:56.590 19:59:54 -- setup/devices.sh@166 -- # dm=dm-0 00:04:56.590 19:59:54 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:56.590 19:59:54 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:56.590 19:59:54 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:04:56.590 19:59:54 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount size= 00:04:56.590 19:59:54 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:04:56.590 19:59:54 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:56.590 19:59:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:56.590 19:59:54 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:04:56.590 19:59:54 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:56.590 19:59:54 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:56.590 19:59:54 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:56.590 19:59:54 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:04:56.590 19:59:54 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:56.590 19:59:54 -- setup/devices.sh@53 -- # local found=0 00:04:56.590 19:59:54 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:56.590 19:59:54 -- setup/devices.sh@56 -- # : 00:04:56.590 19:59:54 -- setup/devices.sh@59 -- # local pci status 00:04:56.590 19:59:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.590 19:59:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:56.590 19:59:54 -- setup/devices.sh@47 -- # setup output config 
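The dm_mount test stitches the two freshly created 1 GiB partitions into a single device-mapper node before formatting and mounting it. The log only records `dmsetup create nvme_dm_test`; a self-contained approximation, assuming a plain linear table that concatenates nvme0n1p1 and nvme0n1p2, would be:

    size1=$(blockdev --getsz /dev/nvme0n1p1)    # partition sizes in 512-byte sectors
    size2=$(blockdev --getsz /dev/nvme0n1p2)
    printf '0 %s linear /dev/nvme0n1p1 0\n%s %s linear /dev/nvme0n1p2 0\n' \
        "$size1" "$size1" "$size2" | dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)  # resolves to /dev/dm-0 in this run
    mkfs.ext4 -qF "$dm"
    mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
    mount "$dm" /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount

Once the mapping exists, both partitions list dm-0 under /sys/class/block/nvme0n1pX/holders/, which is what the checks at devices.sh@168-169 above confirm.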
00:04:56.590 19:59:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.590 19:59:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.881 19:59:57 -- setup/devices.sh@63 -- # found=1 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.881 19:59:57 -- setup/devices.sh@68 -- # [[ -n 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:59.881 19:59:57 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:04:59.881 19:59:57 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.881 19:59:57 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:59.881 19:59:57 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:04:59.881 19:59:57 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:59.881 19:59:57 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:59.881 19:59:57 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:59.881 19:59:57 -- setup/devices.sh@50 -- # local mount_point= 00:04:59.881 19:59:57 -- setup/devices.sh@51 -- # local test_file= 00:04:59.881 19:59:57 -- setup/devices.sh@53 -- # local found=0 00:04:59.881 19:59:57 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:59.881 19:59:57 -- setup/devices.sh@59 -- # local pci status 00:04:59.881 19:59:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.881 19:59:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:59.882 19:59:57 -- setup/devices.sh@47 -- # setup output config 00:04:59.882 19:59:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.882 19:59:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config 00:05:03.174 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:03.175 20:00:00 -- setup/devices.sh@63 -- # found=1 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.175 20:00:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.175 20:00:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:03.175 20:00:00 -- setup/devices.sh@68 -- # return 0 00:05:03.175 20:00:00 -- setup/devices.sh@187 -- # cleanup_dm 00:05:03.175 20:00:00 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:05:03.175 20:00:00 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:03.175 20:00:00 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:03.175 20:00:00 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:03.175 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:03.175 20:00:00 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:03.175 00:05:03.175 real 0m9.823s 00:05:03.175 user 0m2.353s 00:05:03.175 sys 0m4.538s 00:05:03.175 20:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.175 20:00:00 -- common/autotest_common.sh@10 -- # set +x 00:05:03.175 ************************************ 00:05:03.175 END TEST dm_mount 00:05:03.175 ************************************ 00:05:03.175 20:00:00 -- setup/devices.sh@1 -- # cleanup 00:05:03.175 20:00:00 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:03.175 20:00:00 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.175 20:00:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:03.175 20:00:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.175 20:00:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:03.434 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:03.434 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:03.434 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:03.434 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:03.434 20:00:01 -- setup/devices.sh@12 -- 
# cleanup_dm 00:05:03.434 20:00:01 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount 00:05:03.434 20:00:01 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:03.434 20:00:01 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:03.434 20:00:01 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:03.434 20:00:01 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:03.434 20:00:01 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:03.434 00:05:03.434 real 0m26.113s 00:05:03.434 user 0m7.234s 00:05:03.434 sys 0m13.733s 00:05:03.434 20:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.434 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.434 ************************************ 00:05:03.434 END TEST devices 00:05:03.434 ************************************ 00:05:03.434 00:05:03.434 real 1m36.208s 00:05:03.434 user 0m28.002s 00:05:03.434 sys 0m52.442s 00:05:03.434 20:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.434 20:00:01 -- common/autotest_common.sh@10 -- # set +x 00:05:03.434 ************************************ 00:05:03.434 END TEST setup.sh 00:05:03.434 ************************************ 00:05:03.434 20:00:01 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status 00:05:06.722 Hugepages 00:05:06.722 node hugesize free / total 00:05:06.722 node0 1048576kB 0 / 0 00:05:06.722 node0 2048kB 2048 / 2048 00:05:06.722 node1 1048576kB 0 / 0 00:05:06.722 node1 2048kB 0 / 0 00:05:06.722 00:05:06.722 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:06.722 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:06.722 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:06.722 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:06.722 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:06.722 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:06.722 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:06.722 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:06.722 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:06.722 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:06.722 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:06.722 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:06.722 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:06.722 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:06.722 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:06.722 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:06.722 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:06.722 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:06.722 20:00:04 -- spdk/autotest.sh@141 -- # uname -s 00:05:06.722 20:00:04 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:06.722 20:00:04 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:06.722 20:00:04 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:05:10.022 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 
0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:10.022 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:13.310 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:13.310 20:00:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:14.246 20:00:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:14.246 20:00:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:14.246 20:00:12 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:14.246 20:00:12 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:14.246 20:00:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:14.246 20:00:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:14.246 20:00:12 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.246 20:00:12 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:14.246 20:00:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:14.504 20:00:12 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:14.504 20:00:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:14.504 20:00:12 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:05:17.037 Waiting for block devices as requested 00:05:17.295 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:17.295 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:17.554 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:17.554 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:17.554 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:17.821 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:17.821 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:17.821 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:18.087 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:18.087 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:18.087 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:18.360 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:18.360 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:18.360 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:18.360 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:18.623 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:18.623 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:18.623 20:00:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:18.623 20:00:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:18.623 20:00:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:18.623 20:00:16 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:05:18.623 20:00:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:18.623 20:00:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:18.623 20:00:16 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:18.623 20:00:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:18.623 20:00:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:18.623 
20:00:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:18.623 20:00:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:18.623 20:00:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:18.623 20:00:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:18.623 20:00:16 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:05:18.623 20:00:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:18.623 20:00:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:18.623 20:00:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:18.623 20:00:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:18.623 20:00:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:18.623 20:00:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:18.623 20:00:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:18.623 20:00:16 -- common/autotest_common.sh@1542 -- # continue 00:05:18.623 20:00:16 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:18.623 20:00:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.623 20:00:16 -- common/autotest_common.sh@10 -- # set +x 00:05:18.882 20:00:16 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:18.882 20:00:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:18.882 20:00:16 -- common/autotest_common.sh@10 -- # set +x 00:05:18.882 20:00:16 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:05:22.203 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:22.203 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:25.513 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:25.513 20:00:23 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:25.513 20:00:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:25.513 20:00:23 -- common/autotest_common.sh@10 -- # set +x 00:05:25.513 20:00:23 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:25.513 20:00:23 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:25.513 20:00:23 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:25.513 20:00:23 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:25.513 20:00:23 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:25.513 20:00:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:25.513 20:00:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:25.513 20:00:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:25.513 20:00:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:25.513 20:00:23 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:25.513 20:00:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:25.513 20:00:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:25.513 20:00:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:05:25.513 20:00:23 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:25.513 20:00:23 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:25.513 20:00:23 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:25.513 20:00:23 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:25.513 20:00:23 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:25.513 20:00:23 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:05:25.513 20:00:23 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:05:25.513 20:00:23 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2040323 00:05:25.513 20:00:23 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:05:25.513 20:00:23 -- common/autotest_common.sh@1583 -- # waitforlisten 2040323 00:05:25.513 20:00:23 -- common/autotest_common.sh@819 -- # '[' -z 2040323 ']' 00:05:25.513 20:00:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.513 20:00:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.513 20:00:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.513 20:00:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.513 20:00:23 -- common/autotest_common.sh@10 -- # set +x 00:05:25.513 [2024-04-25 20:00:23.254089] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
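opal_revert_cleanup first narrows the NVMe list to controllers whose PCI device ID is 0x0a54 before starting spdk_tgt and attempting the revert. The run does this through gen_nvme.sh piped into jq; a rough sysfs-only equivalent of that BDF discovery (same 0x0a54 filter assumed) is:

    bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        pci_dir=$(readlink -f "$ctrl/device")              # e.g. .../0000:5e:00.0
        if [[ $(cat "$pci_dir/device") == 0x0a54 ]]; then  # same check as get_nvme_bdfs_by_id
            bdfs+=("$(basename "$pci_dir")")
        fi
    done
    printf '%s\n' "${bdfs[@]}"                             # prints 0000:5e:00.0 on this node

On this machine the filter matches only the single controller at 0000:5e:00.0, which is why the revert step later touches only nvme0.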
00:05:25.513 [2024-04-25 20:00:23.254181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040323 ] 00:05:25.513 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.513 [2024-04-25 20:00:23.362468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.772 [2024-04-25 20:00:23.460245] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.772 [2024-04-25 20:00:23.460411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.772 [2024-04-25 20:00:23.655002] 'OCF_Core' volume operations registered 00:05:25.772 [2024-04-25 20:00:23.658481] 'OCF_Cache' volume operations registered 00:05:25.772 [2024-04-25 20:00:23.662412] 'OCF Composite' volume operations registered 00:05:25.772 [2024-04-25 20:00:23.665976] 'SPDK_block_device' volume operations registered 00:05:26.341 20:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.341 20:00:24 -- common/autotest_common.sh@852 -- # return 0 00:05:26.341 20:00:24 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:26.341 20:00:24 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:26.341 20:00:24 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:29.671 nvme0n1 00:05:29.671 20:00:27 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:29.671 [2024-04-25 20:00:27.471580] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:29.671 request: 00:05:29.671 { 00:05:29.671 "nvme_ctrlr_name": "nvme0", 00:05:29.671 "password": "test", 00:05:29.671 "method": "bdev_nvme_opal_revert", 00:05:29.671 "req_id": 1 00:05:29.671 } 00:05:29.671 Got JSON-RPC error response 00:05:29.671 response: 00:05:29.671 { 00:05:29.671 "code": -32602, 00:05:29.671 "message": "Invalid parameters" 00:05:29.671 } 00:05:29.671 20:00:27 -- common/autotest_common.sh@1589 -- # true 00:05:29.671 20:00:27 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:29.671 20:00:27 -- common/autotest_common.sh@1593 -- # killprocess 2040323 00:05:29.671 20:00:27 -- common/autotest_common.sh@926 -- # '[' -z 2040323 ']' 00:05:29.671 20:00:27 -- common/autotest_common.sh@930 -- # kill -0 2040323 00:05:29.671 20:00:27 -- common/autotest_common.sh@931 -- # uname 00:05:29.671 20:00:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:29.671 20:00:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2040323 00:05:29.671 20:00:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:29.671 20:00:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:29.671 20:00:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2040323' 00:05:29.671 killing process with pid 2040323 00:05:29.671 20:00:27 -- common/autotest_common.sh@945 -- # kill 2040323 00:05:29.671 20:00:27 -- common/autotest_common.sh@950 -- # wait 2040323 00:05:33.857 20:00:31 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:33.857 20:00:31 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:33.857 20:00:31 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:33.857 20:00:31 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:33.857 20:00:31 -- 
spdk/autotest.sh@173 -- # timing_enter lib 00:05:33.857 20:00:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:33.857 20:00:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.857 20:00:31 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh 00:05:33.857 20:00:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.857 20:00:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.857 20:00:31 -- common/autotest_common.sh@10 -- # set +x 00:05:33.857 ************************************ 00:05:33.857 START TEST env 00:05:33.857 ************************************ 00:05:34.115 20:00:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh 00:05:34.115 * Looking for test storage... 00:05:34.115 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env 00:05:34.115 20:00:31 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.115 20:00:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.115 20:00:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.115 20:00:31 -- common/autotest_common.sh@10 -- # set +x 00:05:34.115 ************************************ 00:05:34.115 START TEST env_memory 00:05:34.115 ************************************ 00:05:34.115 20:00:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut 00:05:34.115 00:05:34.115 00:05:34.115 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.115 http://cunit.sourceforge.net/ 00:05:34.115 00:05:34.115 00:05:34.115 Suite: memory 00:05:34.115 Test: alloc and free memory map ...[2024-04-25 20:00:31.943379] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:34.115 passed 00:05:34.115 Test: mem map translation ...[2024-04-25 20:00:31.972667] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:34.115 [2024-04-25 20:00:31.972690] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:34.115 [2024-04-25 20:00:31.972745] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:34.115 [2024-04-25 20:00:31.972758] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:34.115 passed 00:05:34.115 Test: mem map registration ...[2024-04-25 20:00:32.030448] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:34.115 [2024-04-25 20:00:32.030470] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:34.115 passed 00:05:34.375 Test: mem map adjacent registrations ...passed 00:05:34.375 00:05:34.375 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.375 suites 1 1 n/a 0 0 00:05:34.375 tests 4 4 4 0 0 00:05:34.375 asserts 152 152 152 0 n/a 
00:05:34.375 00:05:34.375 Elapsed time = 0.195 seconds 00:05:34.375 00:05:34.375 real 0m0.206s 00:05:34.375 user 0m0.193s 00:05:34.375 sys 0m0.012s 00:05:34.375 20:00:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.375 20:00:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.375 ************************************ 00:05:34.375 END TEST env_memory 00:05:34.375 ************************************ 00:05:34.375 20:00:32 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.375 20:00:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.375 20:00:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.375 20:00:32 -- common/autotest_common.sh@10 -- # set +x 00:05:34.375 ************************************ 00:05:34.375 START TEST env_vtophys 00:05:34.375 ************************************ 00:05:34.375 20:00:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:34.375 EAL: lib.eal log level changed from notice to debug 00:05:34.375 EAL: Detected lcore 0 as core 0 on socket 0 00:05:34.375 EAL: Detected lcore 1 as core 1 on socket 0 00:05:34.375 EAL: Detected lcore 2 as core 2 on socket 0 00:05:34.375 EAL: Detected lcore 3 as core 3 on socket 0 00:05:34.375 EAL: Detected lcore 4 as core 4 on socket 0 00:05:34.375 EAL: Detected lcore 5 as core 8 on socket 0 00:05:34.375 EAL: Detected lcore 6 as core 9 on socket 0 00:05:34.375 EAL: Detected lcore 7 as core 10 on socket 0 00:05:34.375 EAL: Detected lcore 8 as core 11 on socket 0 00:05:34.375 EAL: Detected lcore 9 as core 16 on socket 0 00:05:34.375 EAL: Detected lcore 10 as core 17 on socket 0 00:05:34.375 EAL: Detected lcore 11 as core 18 on socket 0 00:05:34.375 EAL: Detected lcore 12 as core 19 on socket 0 00:05:34.375 EAL: Detected lcore 13 as core 20 on socket 0 00:05:34.375 EAL: Detected lcore 14 as core 24 on socket 0 00:05:34.375 EAL: Detected lcore 15 as core 25 on socket 0 00:05:34.375 EAL: Detected lcore 16 as core 26 on socket 0 00:05:34.375 EAL: Detected lcore 17 as core 27 on socket 0 00:05:34.375 EAL: Detected lcore 18 as core 0 on socket 1 00:05:34.375 EAL: Detected lcore 19 as core 1 on socket 1 00:05:34.375 EAL: Detected lcore 20 as core 2 on socket 1 00:05:34.375 EAL: Detected lcore 21 as core 3 on socket 1 00:05:34.375 EAL: Detected lcore 22 as core 4 on socket 1 00:05:34.375 EAL: Detected lcore 23 as core 8 on socket 1 00:05:34.375 EAL: Detected lcore 24 as core 9 on socket 1 00:05:34.375 EAL: Detected lcore 25 as core 10 on socket 1 00:05:34.375 EAL: Detected lcore 26 as core 11 on socket 1 00:05:34.375 EAL: Detected lcore 27 as core 16 on socket 1 00:05:34.375 EAL: Detected lcore 28 as core 17 on socket 1 00:05:34.375 EAL: Detected lcore 29 as core 18 on socket 1 00:05:34.375 EAL: Detected lcore 30 as core 19 on socket 1 00:05:34.375 EAL: Detected lcore 31 as core 20 on socket 1 00:05:34.375 EAL: Detected lcore 32 as core 24 on socket 1 00:05:34.375 EAL: Detected lcore 33 as core 25 on socket 1 00:05:34.375 EAL: Detected lcore 34 as core 26 on socket 1 00:05:34.375 EAL: Detected lcore 35 as core 27 on socket 1 00:05:34.375 EAL: Detected lcore 36 as core 0 on socket 0 00:05:34.375 EAL: Detected lcore 37 as core 1 on socket 0 00:05:34.375 EAL: Detected lcore 38 as core 2 on socket 0 00:05:34.375 EAL: Detected lcore 39 as core 3 on socket 0 00:05:34.375 EAL: Detected lcore 40 as core 4 on socket 0 00:05:34.375 EAL: Detected lcore 41 as core 8 on 
socket 0 00:05:34.375 EAL: Detected lcore 42 as core 9 on socket 0 00:05:34.375 EAL: Detected lcore 43 as core 10 on socket 0 00:05:34.375 EAL: Detected lcore 44 as core 11 on socket 0 00:05:34.375 EAL: Detected lcore 45 as core 16 on socket 0 00:05:34.375 EAL: Detected lcore 46 as core 17 on socket 0 00:05:34.375 EAL: Detected lcore 47 as core 18 on socket 0 00:05:34.375 EAL: Detected lcore 48 as core 19 on socket 0 00:05:34.375 EAL: Detected lcore 49 as core 20 on socket 0 00:05:34.375 EAL: Detected lcore 50 as core 24 on socket 0 00:05:34.375 EAL: Detected lcore 51 as core 25 on socket 0 00:05:34.375 EAL: Detected lcore 52 as core 26 on socket 0 00:05:34.375 EAL: Detected lcore 53 as core 27 on socket 0 00:05:34.375 EAL: Detected lcore 54 as core 0 on socket 1 00:05:34.375 EAL: Detected lcore 55 as core 1 on socket 1 00:05:34.375 EAL: Detected lcore 56 as core 2 on socket 1 00:05:34.375 EAL: Detected lcore 57 as core 3 on socket 1 00:05:34.375 EAL: Detected lcore 58 as core 4 on socket 1 00:05:34.375 EAL: Detected lcore 59 as core 8 on socket 1 00:05:34.375 EAL: Detected lcore 60 as core 9 on socket 1 00:05:34.375 EAL: Detected lcore 61 as core 10 on socket 1 00:05:34.375 EAL: Detected lcore 62 as core 11 on socket 1 00:05:34.375 EAL: Detected lcore 63 as core 16 on socket 1 00:05:34.375 EAL: Detected lcore 64 as core 17 on socket 1 00:05:34.375 EAL: Detected lcore 65 as core 18 on socket 1 00:05:34.375 EAL: Detected lcore 66 as core 19 on socket 1 00:05:34.375 EAL: Detected lcore 67 as core 20 on socket 1 00:05:34.375 EAL: Detected lcore 68 as core 24 on socket 1 00:05:34.375 EAL: Detected lcore 69 as core 25 on socket 1 00:05:34.375 EAL: Detected lcore 70 as core 26 on socket 1 00:05:34.375 EAL: Detected lcore 71 as core 27 on socket 1 00:05:34.375 EAL: Maximum logical cores by configuration: 128 00:05:34.375 EAL: Detected CPU lcores: 72 00:05:34.375 EAL: Detected NUMA nodes: 2 00:05:34.375 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:34.375 EAL: Detected shared linkage of DPDK 00:05:34.375 EAL: No shared files mode enabled, IPC will be disabled 00:05:34.375 EAL: Bus pci wants IOVA as 'DC' 00:05:34.375 EAL: Buses did not request a specific IOVA mode. 00:05:34.375 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:34.375 EAL: Selected IOVA mode 'VA' 00:05:34.375 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.375 EAL: Probing VFIO support... 00:05:34.375 EAL: IOMMU type 1 (Type 1) is supported 00:05:34.375 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:34.375 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:34.375 EAL: VFIO support initialized 00:05:34.375 EAL: Ask a virtual area of 0x2e000 bytes 00:05:34.375 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:34.375 EAL: Setting up physically contiguous memory... 
00:05:34.375 EAL: Setting maximum number of open files to 524288 00:05:34.375 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:34.375 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:34.375 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:34.375 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.375 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:34.375 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.375 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.375 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:34.375 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:34.376 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.376 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:34.376 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.376 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.376 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:34.376 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:34.376 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.376 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:34.376 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.376 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.376 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:34.376 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:34.376 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.376 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:34.376 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:34.376 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.376 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:34.376 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:34.376 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:34.376 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.376 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:34.376 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.376 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.376 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:34.376 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:34.376 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.376 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:34.376 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.376 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.376 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:34.376 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:34.376 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.376 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:34.376 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.376 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.376 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:34.376 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:34.376 EAL: Ask a virtual area of 0x61000 bytes 00:05:34.376 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:34.376 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:34.376 EAL: Ask a virtual area of 0x400000000 bytes 00:05:34.376 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:34.376 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:34.376 EAL: Hugepages will be freed exactly as allocated. 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: TSC frequency is ~2300000 KHz 00:05:34.376 EAL: Main lcore 0 is ready (tid=7fd2ab645a00;cpuset=[0]) 00:05:34.376 EAL: Trying to obtain current memory policy. 00:05:34.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.376 EAL: Restoring previous memory policy: 0 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was expanded by 2MB 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:34.376 EAL: Mem event callback 'spdk:(nil)' registered 00:05:34.376 00:05:34.376 00:05:34.376 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.376 http://cunit.sourceforge.net/ 00:05:34.376 00:05:34.376 00:05:34.376 Suite: components_suite 00:05:34.376 Test: vtophys_malloc_test ...passed 00:05:34.376 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:34.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.376 EAL: Restoring previous memory policy: 4 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was expanded by 4MB 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was shrunk by 4MB 00:05:34.376 EAL: Trying to obtain current memory policy. 00:05:34.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.376 EAL: Restoring previous memory policy: 4 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was expanded by 6MB 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was shrunk by 6MB 00:05:34.376 EAL: Trying to obtain current memory policy. 00:05:34.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.376 EAL: Restoring previous memory policy: 4 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was expanded by 10MB 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was shrunk by 10MB 00:05:34.376 EAL: Trying to obtain current memory policy. 
00:05:34.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.376 EAL: Restoring previous memory policy: 4 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was expanded by 18MB 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was shrunk by 18MB 00:05:34.376 EAL: Trying to obtain current memory policy. 00:05:34.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.376 EAL: Restoring previous memory policy: 4 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was expanded by 34MB 00:05:34.376 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.376 EAL: request: mp_malloc_sync 00:05:34.376 EAL: No shared files mode enabled, IPC is disabled 00:05:34.376 EAL: Heap on socket 0 was shrunk by 34MB 00:05:34.376 EAL: Trying to obtain current memory policy. 00:05:34.376 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.637 EAL: Restoring previous memory policy: 4 00:05:34.637 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.637 EAL: request: mp_malloc_sync 00:05:34.637 EAL: No shared files mode enabled, IPC is disabled 00:05:34.637 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.637 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.637 EAL: request: mp_malloc_sync 00:05:34.637 EAL: No shared files mode enabled, IPC is disabled 00:05:34.637 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.637 EAL: Trying to obtain current memory policy. 00:05:34.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.637 EAL: Restoring previous memory policy: 4 00:05:34.637 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.637 EAL: request: mp_malloc_sync 00:05:34.637 EAL: No shared files mode enabled, IPC is disabled 00:05:34.637 EAL: Heap on socket 0 was expanded by 130MB 00:05:34.637 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.637 EAL: request: mp_malloc_sync 00:05:34.637 EAL: No shared files mode enabled, IPC is disabled 00:05:34.637 EAL: Heap on socket 0 was shrunk by 130MB 00:05:34.637 EAL: Trying to obtain current memory policy. 00:05:34.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.637 EAL: Restoring previous memory policy: 4 00:05:34.637 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.637 EAL: request: mp_malloc_sync 00:05:34.637 EAL: No shared files mode enabled, IPC is disabled 00:05:34.637 EAL: Heap on socket 0 was expanded by 258MB 00:05:34.637 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.637 EAL: request: mp_malloc_sync 00:05:34.637 EAL: No shared files mode enabled, IPC is disabled 00:05:34.637 EAL: Heap on socket 0 was shrunk by 258MB 00:05:34.637 EAL: Trying to obtain current memory policy. 
00:05:34.637 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.896 EAL: Restoring previous memory policy: 4 00:05:34.896 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.896 EAL: request: mp_malloc_sync 00:05:34.896 EAL: No shared files mode enabled, IPC is disabled 00:05:34.896 EAL: Heap on socket 0 was expanded by 514MB 00:05:34.896 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.156 EAL: request: mp_malloc_sync 00:05:35.156 EAL: No shared files mode enabled, IPC is disabled 00:05:35.156 EAL: Heap on socket 0 was shrunk by 514MB 00:05:35.156 EAL: Trying to obtain current memory policy. 00:05:35.156 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.414 EAL: Restoring previous memory policy: 4 00:05:35.414 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.414 EAL: request: mp_malloc_sync 00:05:35.414 EAL: No shared files mode enabled, IPC is disabled 00:05:35.414 EAL: Heap on socket 0 was expanded by 1026MB 00:05:35.414 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.675 EAL: request: mp_malloc_sync 00:05:35.675 EAL: No shared files mode enabled, IPC is disabled 00:05:35.675 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:35.675 passed 00:05:35.675 00:05:35.675 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.675 suites 1 1 n/a 0 0 00:05:35.675 tests 2 2 2 0 0 00:05:35.675 asserts 497 497 497 0 n/a 00:05:35.675 00:05:35.675 Elapsed time = 1.201 seconds 00:05:35.675 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.675 EAL: request: mp_malloc_sync 00:05:35.675 EAL: No shared files mode enabled, IPC is disabled 00:05:35.675 EAL: Heap on socket 0 was shrunk by 2MB 00:05:35.675 EAL: No shared files mode enabled, IPC is disabled 00:05:35.675 EAL: No shared files mode enabled, IPC is disabled 00:05:35.675 EAL: No shared files mode enabled, IPC is disabled 00:05:35.675 00:05:35.675 real 0m1.355s 00:05:35.675 user 0m0.784s 00:05:35.675 sys 0m0.543s 00:05:35.675 20:00:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.675 20:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.675 ************************************ 00:05:35.675 END TEST env_vtophys 00:05:35.675 ************************************ 00:05:35.675 20:00:33 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.676 20:00:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:35.676 20:00:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.676 20:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.676 ************************************ 00:05:35.676 START TEST env_pci 00:05:35.676 ************************************ 00:05:35.676 20:00:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut 00:05:35.676 00:05:35.676 00:05:35.676 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.676 http://cunit.sourceforge.net/ 00:05:35.676 00:05:35.676 00:05:35.676 Suite: pci 00:05:35.676 Test: pci_hook ...[2024-04-25 20:00:33.582271] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2041702 has claimed it 00:05:35.935 EAL: Cannot find device (10000:00:01.0) 00:05:35.935 EAL: Failed to attach device on primary process 00:05:35.935 passed 00:05:35.935 00:05:35.935 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.935 suites 1 1 n/a 0 0 00:05:35.935 tests 1 1 1 0 0 00:05:35.935 asserts 
25 25 25 0 n/a 00:05:35.935 00:05:35.935 Elapsed time = 0.039 seconds 00:05:35.935 00:05:35.935 real 0m0.064s 00:05:35.935 user 0m0.021s 00:05:35.935 sys 0m0.043s 00:05:35.935 20:00:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.935 20:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.935 ************************************ 00:05:35.935 END TEST env_pci 00:05:35.935 ************************************ 00:05:35.935 20:00:33 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:35.935 20:00:33 -- env/env.sh@15 -- # uname 00:05:35.935 20:00:33 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:35.935 20:00:33 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:35.935 20:00:33 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.935 20:00:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:35.935 20:00:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.935 20:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:35.935 ************************************ 00:05:35.935 START TEST env_dpdk_post_init 00:05:35.935 ************************************ 00:05:35.935 20:00:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:35.935 EAL: Detected CPU lcores: 72 00:05:35.935 EAL: Detected NUMA nodes: 2 00:05:35.935 EAL: Detected shared linkage of DPDK 00:05:35.935 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:35.935 EAL: Selected IOVA mode 'VA' 00:05:35.935 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.935 EAL: VFIO support initialized 00:05:35.935 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:35.935 EAL: Using IOMMU type 1 (Type 1) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:36.195 EAL: Ignore mapping IO port bar(1) 00:05:36.195 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:36.764 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:37.023 EAL: Ignore mapping IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:37.023 EAL: Ignore mapping IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:37.023 EAL: Ignore mapping IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:37.023 EAL: Ignore mapping 
IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:37.023 EAL: Ignore mapping IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:37.023 EAL: Ignore mapping IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:37.023 EAL: Ignore mapping IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:37.023 EAL: Ignore mapping IO port bar(1) 00:05:37.023 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:42.299 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:42.299 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:42.559 Starting DPDK initialization... 00:05:42.559 Starting SPDK post initialization... 00:05:42.559 SPDK NVMe probe 00:05:42.559 Attaching to 0000:5e:00.0 00:05:42.559 Attached to 0000:5e:00.0 00:05:42.559 Cleaning up... 00:05:42.559 00:05:42.559 real 0m6.748s 00:05:42.559 user 0m5.070s 00:05:42.559 sys 0m0.733s 00:05:42.559 20:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.559 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.559 ************************************ 00:05:42.559 END TEST env_dpdk_post_init 00:05:42.559 ************************************ 00:05:42.559 20:00:40 -- env/env.sh@26 -- # uname 00:05:42.559 20:00:40 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:42.559 20:00:40 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.559 20:00:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.559 20:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.559 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.559 ************************************ 00:05:42.559 START TEST env_mem_callbacks 00:05:42.559 ************************************ 00:05:42.559 20:00:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.817 EAL: Detected CPU lcores: 72 00:05:42.817 EAL: Detected NUMA nodes: 2 00:05:42.817 EAL: Detected shared linkage of DPDK 00:05:42.817 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.817 EAL: Selected IOVA mode 'VA' 00:05:42.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.817 EAL: VFIO support initialized 00:05:42.817 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.817 00:05:42.817 00:05:42.817 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.817 http://cunit.sourceforge.net/ 00:05:42.817 00:05:42.817 00:05:42.817 Suite: memory 00:05:42.817 Test: test ... 
00:05:42.817 register 0x200000200000 2097152 00:05:42.817 malloc 3145728 00:05:42.817 register 0x200000400000 4194304 00:05:42.817 buf 0x200000500000 len 3145728 PASSED 00:05:42.817 malloc 64 00:05:42.817 buf 0x2000004fff40 len 64 PASSED 00:05:42.817 malloc 4194304 00:05:42.817 register 0x200000800000 6291456 00:05:42.817 buf 0x200000a00000 len 4194304 PASSED 00:05:42.817 free 0x200000500000 3145728 00:05:42.817 free 0x2000004fff40 64 00:05:42.817 unregister 0x200000400000 4194304 PASSED 00:05:42.817 free 0x200000a00000 4194304 00:05:42.817 unregister 0x200000800000 6291456 PASSED 00:05:42.817 malloc 8388608 00:05:42.817 register 0x200000400000 10485760 00:05:42.817 buf 0x200000600000 len 8388608 PASSED 00:05:42.817 free 0x200000600000 8388608 00:05:42.817 unregister 0x200000400000 10485760 PASSED 00:05:42.817 passed 00:05:42.818 00:05:42.818 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.818 suites 1 1 n/a 0 0 00:05:42.818 tests 1 1 1 0 0 00:05:42.818 asserts 15 15 15 0 n/a 00:05:42.818 00:05:42.818 Elapsed time = 0.008 seconds 00:05:42.818 00:05:42.818 real 0m0.084s 00:05:42.818 user 0m0.028s 00:05:42.818 sys 0m0.055s 00:05:42.818 20:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.818 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.818 ************************************ 00:05:42.818 END TEST env_mem_callbacks 00:05:42.818 ************************************ 00:05:42.818 00:05:42.818 real 0m8.827s 00:05:42.818 user 0m6.225s 00:05:42.818 sys 0m1.680s 00:05:42.818 20:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.818 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.818 ************************************ 00:05:42.818 END TEST env 00:05:42.818 ************************************ 00:05:42.818 20:00:40 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh 00:05:42.818 20:00:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.818 20:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.818 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:42.818 ************************************ 00:05:42.818 START TEST rpc 00:05:42.818 ************************************ 00:05:42.818 20:00:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh 00:05:43.077 * Looking for test storage... 00:05:43.077 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:05:43.077 20:00:40 -- rpc/rpc.sh@65 -- # spdk_pid=2042856 00:05:43.077 20:00:40 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.077 20:00:40 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:43.077 20:00:40 -- rpc/rpc.sh@67 -- # waitforlisten 2042856 00:05:43.077 20:00:40 -- common/autotest_common.sh@819 -- # '[' -z 2042856 ']' 00:05:43.077 20:00:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.077 20:00:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.077 20:00:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:43.077 20:00:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.077 20:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:43.077 [2024-04-25 20:00:40.827160] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:43.077 [2024-04-25 20:00:40.827225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042856 ] 00:05:43.077 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.077 [2024-04-25 20:00:40.919976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.336 [2024-04-25 20:00:41.021380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.336 [2024-04-25 20:00:41.021525] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:43.336 [2024-04-25 20:00:41.021540] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2042856' to capture a snapshot of events at runtime. 00:05:43.336 [2024-04-25 20:00:41.021554] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2042856 for offline analysis/debug. 00:05:43.336 [2024-04-25 20:00:41.021582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.336 [2024-04-25 20:00:41.220007] 'OCF_Core' volume operations registered 00:05:43.336 [2024-04-25 20:00:41.223830] 'OCF_Cache' volume operations registered 00:05:43.336 [2024-04-25 20:00:41.227783] 'OCF Composite' volume operations registered 00:05:43.336 [2024-04-25 20:00:41.231302] 'SPDK_block_device' volume operations registered 00:05:43.904 20:00:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:43.904 20:00:41 -- common/autotest_common.sh@852 -- # return 0 00:05:43.904 20:00:41 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:05:43.904 20:00:41 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc 00:05:43.904 20:00:41 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:43.904 20:00:41 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:43.904 20:00:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.904 20:00:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.904 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.904 ************************************ 00:05:43.904 START TEST rpc_integrity 00:05:43.904 ************************************ 00:05:43.904 20:00:41 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:43.904 20:00:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:43.904 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.904 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.904 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.904 20:00:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:43.904 20:00:41 -- rpc/rpc.sh@13 -- # jq length 00:05:43.904 20:00:41 -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:43.904 20:00:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:43.904 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.904 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:43.904 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.904 20:00:41 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:43.904 20:00:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:43.904 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.904 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.163 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.163 20:00:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.163 { 00:05:44.163 "name": "Malloc0", 00:05:44.163 "aliases": [ 00:05:44.163 "31d07af9-ab54-4506-8c32-97d9cff797b4" 00:05:44.163 ], 00:05:44.163 "product_name": "Malloc disk", 00:05:44.163 "block_size": 512, 00:05:44.163 "num_blocks": 16384, 00:05:44.163 "uuid": "31d07af9-ab54-4506-8c32-97d9cff797b4", 00:05:44.163 "assigned_rate_limits": { 00:05:44.163 "rw_ios_per_sec": 0, 00:05:44.163 "rw_mbytes_per_sec": 0, 00:05:44.163 "r_mbytes_per_sec": 0, 00:05:44.163 "w_mbytes_per_sec": 0 00:05:44.163 }, 00:05:44.163 "claimed": false, 00:05:44.163 "zoned": false, 00:05:44.163 "supported_io_types": { 00:05:44.163 "read": true, 00:05:44.163 "write": true, 00:05:44.163 "unmap": true, 00:05:44.163 "write_zeroes": true, 00:05:44.163 "flush": true, 00:05:44.163 "reset": true, 00:05:44.163 "compare": false, 00:05:44.163 "compare_and_write": false, 00:05:44.163 "abort": true, 00:05:44.163 "nvme_admin": false, 00:05:44.163 "nvme_io": false 00:05:44.163 }, 00:05:44.163 "memory_domains": [ 00:05:44.163 { 00:05:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.163 "dma_device_type": 2 00:05:44.163 } 00:05:44.163 ], 00:05:44.163 "driver_specific": {} 00:05:44.163 } 00:05:44.163 ]' 00:05:44.163 20:00:41 -- rpc/rpc.sh@17 -- # jq length 00:05:44.163 20:00:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.163 20:00:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:44.163 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.163 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.163 [2024-04-25 20:00:41.897334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:44.163 [2024-04-25 20:00:41.897382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.163 [2024-04-25 20:00:41.897400] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xce3810 00:05:44.163 [2024-04-25 20:00:41.897413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.163 [2024-04-25 20:00:41.899029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.163 [2024-04-25 20:00:41.899059] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.163 Passthru0 00:05:44.163 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.163 20:00:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.163 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.163 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.163 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.163 20:00:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.163 { 00:05:44.163 "name": "Malloc0", 00:05:44.163 "aliases": [ 00:05:44.163 "31d07af9-ab54-4506-8c32-97d9cff797b4" 00:05:44.163 
], 00:05:44.163 "product_name": "Malloc disk", 00:05:44.163 "block_size": 512, 00:05:44.163 "num_blocks": 16384, 00:05:44.163 "uuid": "31d07af9-ab54-4506-8c32-97d9cff797b4", 00:05:44.163 "assigned_rate_limits": { 00:05:44.163 "rw_ios_per_sec": 0, 00:05:44.163 "rw_mbytes_per_sec": 0, 00:05:44.163 "r_mbytes_per_sec": 0, 00:05:44.163 "w_mbytes_per_sec": 0 00:05:44.163 }, 00:05:44.163 "claimed": true, 00:05:44.163 "claim_type": "exclusive_write", 00:05:44.163 "zoned": false, 00:05:44.163 "supported_io_types": { 00:05:44.163 "read": true, 00:05:44.163 "write": true, 00:05:44.163 "unmap": true, 00:05:44.163 "write_zeroes": true, 00:05:44.163 "flush": true, 00:05:44.163 "reset": true, 00:05:44.163 "compare": false, 00:05:44.163 "compare_and_write": false, 00:05:44.163 "abort": true, 00:05:44.163 "nvme_admin": false, 00:05:44.163 "nvme_io": false 00:05:44.163 }, 00:05:44.163 "memory_domains": [ 00:05:44.163 { 00:05:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.163 "dma_device_type": 2 00:05:44.163 } 00:05:44.163 ], 00:05:44.163 "driver_specific": {} 00:05:44.163 }, 00:05:44.163 { 00:05:44.163 "name": "Passthru0", 00:05:44.163 "aliases": [ 00:05:44.163 "673412e8-805f-5ba4-a491-45c71e552ace" 00:05:44.163 ], 00:05:44.163 "product_name": "passthru", 00:05:44.163 "block_size": 512, 00:05:44.163 "num_blocks": 16384, 00:05:44.163 "uuid": "673412e8-805f-5ba4-a491-45c71e552ace", 00:05:44.163 "assigned_rate_limits": { 00:05:44.163 "rw_ios_per_sec": 0, 00:05:44.163 "rw_mbytes_per_sec": 0, 00:05:44.163 "r_mbytes_per_sec": 0, 00:05:44.163 "w_mbytes_per_sec": 0 00:05:44.163 }, 00:05:44.163 "claimed": false, 00:05:44.163 "zoned": false, 00:05:44.163 "supported_io_types": { 00:05:44.163 "read": true, 00:05:44.163 "write": true, 00:05:44.163 "unmap": true, 00:05:44.163 "write_zeroes": true, 00:05:44.163 "flush": true, 00:05:44.163 "reset": true, 00:05:44.163 "compare": false, 00:05:44.163 "compare_and_write": false, 00:05:44.163 "abort": true, 00:05:44.163 "nvme_admin": false, 00:05:44.163 "nvme_io": false 00:05:44.163 }, 00:05:44.163 "memory_domains": [ 00:05:44.163 { 00:05:44.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.163 "dma_device_type": 2 00:05:44.163 } 00:05:44.163 ], 00:05:44.163 "driver_specific": { 00:05:44.163 "passthru": { 00:05:44.163 "name": "Passthru0", 00:05:44.163 "base_bdev_name": "Malloc0" 00:05:44.163 } 00:05:44.163 } 00:05:44.163 } 00:05:44.163 ]' 00:05:44.163 20:00:41 -- rpc/rpc.sh@21 -- # jq length 00:05:44.163 20:00:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.163 20:00:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.163 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.163 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.163 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.163 20:00:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:44.163 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.163 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.163 20:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.163 20:00:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.163 20:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.163 20:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:44.163 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.163 20:00:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.163 20:00:42 -- rpc/rpc.sh@26 -- # jq length 00:05:44.163 20:00:42 -- rpc/rpc.sh@26 -- # '[' 0 
== 0 ']' 00:05:44.163 00:05:44.163 real 0m0.293s 00:05:44.163 user 0m0.183s 00:05:44.164 sys 0m0.053s 00:05:44.164 20:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.164 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.164 ************************************ 00:05:44.164 END TEST rpc_integrity 00:05:44.164 ************************************ 00:05:44.164 20:00:42 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:44.164 20:00:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.164 20:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.164 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.164 ************************************ 00:05:44.164 START TEST rpc_plugins 00:05:44.164 ************************************ 00:05:44.164 20:00:42 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:44.164 20:00:42 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:44.164 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.164 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.423 20:00:42 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:44.423 20:00:42 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:44.423 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.423 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.423 20:00:42 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:44.423 { 00:05:44.423 "name": "Malloc1", 00:05:44.423 "aliases": [ 00:05:44.423 "6ea7d203-bd3b-42d1-b240-38e69fc7f407" 00:05:44.423 ], 00:05:44.423 "product_name": "Malloc disk", 00:05:44.423 "block_size": 4096, 00:05:44.423 "num_blocks": 256, 00:05:44.423 "uuid": "6ea7d203-bd3b-42d1-b240-38e69fc7f407", 00:05:44.423 "assigned_rate_limits": { 00:05:44.423 "rw_ios_per_sec": 0, 00:05:44.423 "rw_mbytes_per_sec": 0, 00:05:44.423 "r_mbytes_per_sec": 0, 00:05:44.423 "w_mbytes_per_sec": 0 00:05:44.423 }, 00:05:44.423 "claimed": false, 00:05:44.423 "zoned": false, 00:05:44.423 "supported_io_types": { 00:05:44.423 "read": true, 00:05:44.423 "write": true, 00:05:44.423 "unmap": true, 00:05:44.423 "write_zeroes": true, 00:05:44.423 "flush": true, 00:05:44.423 "reset": true, 00:05:44.423 "compare": false, 00:05:44.423 "compare_and_write": false, 00:05:44.423 "abort": true, 00:05:44.423 "nvme_admin": false, 00:05:44.423 "nvme_io": false 00:05:44.423 }, 00:05:44.423 "memory_domains": [ 00:05:44.423 { 00:05:44.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.423 "dma_device_type": 2 00:05:44.423 } 00:05:44.423 ], 00:05:44.423 "driver_specific": {} 00:05:44.423 } 00:05:44.423 ]' 00:05:44.423 20:00:42 -- rpc/rpc.sh@32 -- # jq length 00:05:44.423 20:00:42 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:44.423 20:00:42 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:44.423 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.423 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.423 20:00:42 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:44.423 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.423 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.423 20:00:42 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:44.423 20:00:42 -- 
rpc/rpc.sh@36 -- # jq length 00:05:44.423 20:00:42 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:44.423 00:05:44.423 real 0m0.133s 00:05:44.423 user 0m0.085s 00:05:44.423 sys 0m0.020s 00:05:44.423 20:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.423 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 ************************************ 00:05:44.423 END TEST rpc_plugins 00:05:44.423 ************************************ 00:05:44.423 20:00:42 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:44.423 20:00:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.423 20:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.423 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 ************************************ 00:05:44.423 START TEST rpc_trace_cmd_test 00:05:44.423 ************************************ 00:05:44.423 20:00:42 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:44.423 20:00:42 -- rpc/rpc.sh@40 -- # local info 00:05:44.423 20:00:42 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:44.423 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.423 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.423 20:00:42 -- rpc/rpc.sh@42 -- # info='{ 00:05:44.423 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2042856", 00:05:44.423 "tpoint_group_mask": "0x8", 00:05:44.423 "iscsi_conn": { 00:05:44.423 "mask": "0x2", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "scsi": { 00:05:44.423 "mask": "0x4", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "bdev": { 00:05:44.423 "mask": "0x8", 00:05:44.423 "tpoint_mask": "0xffffffffffffffff" 00:05:44.423 }, 00:05:44.423 "nvmf_rdma": { 00:05:44.423 "mask": "0x10", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "nvmf_tcp": { 00:05:44.423 "mask": "0x20", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "ftl": { 00:05:44.423 "mask": "0x40", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "blobfs": { 00:05:44.423 "mask": "0x80", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "dsa": { 00:05:44.423 "mask": "0x200", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "thread": { 00:05:44.423 "mask": "0x400", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "nvme_pcie": { 00:05:44.423 "mask": "0x800", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "iaa": { 00:05:44.423 "mask": "0x1000", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "nvme_tcp": { 00:05:44.423 "mask": "0x2000", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 }, 00:05:44.423 "bdev_nvme": { 00:05:44.423 "mask": "0x4000", 00:05:44.423 "tpoint_mask": "0x0" 00:05:44.423 } 00:05:44.423 }' 00:05:44.423 20:00:42 -- rpc/rpc.sh@43 -- # jq length 00:05:44.423 20:00:42 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:44.423 20:00:42 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:44.682 20:00:42 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:44.682 20:00:42 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:44.682 20:00:42 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:44.682 20:00:42 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:44.682 20:00:42 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:44.683 20:00:42 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:44.683 20:00:42 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
00:05:44.683 00:05:44.683 real 0m0.245s 00:05:44.683 user 0m0.199s 00:05:44.683 sys 0m0.039s 00:05:44.683 20:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.683 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.683 ************************************ 00:05:44.683 END TEST rpc_trace_cmd_test 00:05:44.683 ************************************ 00:05:44.683 20:00:42 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:44.683 20:00:42 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:44.683 20:00:42 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:44.683 20:00:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.683 20:00:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.683 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.683 ************************************ 00:05:44.683 START TEST rpc_daemon_integrity 00:05:44.683 ************************************ 00:05:44.683 20:00:42 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:44.683 20:00:42 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.683 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.683 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.683 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.683 20:00:42 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.683 20:00:42 -- rpc/rpc.sh@13 -- # jq length 00:05:44.942 20:00:42 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.942 20:00:42 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.942 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.942 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.942 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.942 20:00:42 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:44.942 20:00:42 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.942 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.942 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.942 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.942 20:00:42 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.942 { 00:05:44.942 "name": "Malloc2", 00:05:44.942 "aliases": [ 00:05:44.942 "225a4d02-6236-4f33-814f-e7aa92727d22" 00:05:44.942 ], 00:05:44.942 "product_name": "Malloc disk", 00:05:44.942 "block_size": 512, 00:05:44.942 "num_blocks": 16384, 00:05:44.942 "uuid": "225a4d02-6236-4f33-814f-e7aa92727d22", 00:05:44.942 "assigned_rate_limits": { 00:05:44.942 "rw_ios_per_sec": 0, 00:05:44.942 "rw_mbytes_per_sec": 0, 00:05:44.942 "r_mbytes_per_sec": 0, 00:05:44.942 "w_mbytes_per_sec": 0 00:05:44.942 }, 00:05:44.942 "claimed": false, 00:05:44.942 "zoned": false, 00:05:44.942 "supported_io_types": { 00:05:44.942 "read": true, 00:05:44.942 "write": true, 00:05:44.942 "unmap": true, 00:05:44.942 "write_zeroes": true, 00:05:44.942 "flush": true, 00:05:44.942 "reset": true, 00:05:44.942 "compare": false, 00:05:44.942 "compare_and_write": false, 00:05:44.942 "abort": true, 00:05:44.942 "nvme_admin": false, 00:05:44.942 "nvme_io": false 00:05:44.942 }, 00:05:44.942 "memory_domains": [ 00:05:44.942 { 00:05:44.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.942 "dma_device_type": 2 00:05:44.942 } 00:05:44.942 ], 00:05:44.942 "driver_specific": {} 00:05:44.942 } 00:05:44.942 ]' 00:05:44.942 20:00:42 -- rpc/rpc.sh@17 -- # jq length 00:05:44.942 20:00:42 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.942 20:00:42 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 
00:05:44.942 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.942 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.942 [2024-04-25 20:00:42.703651] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:44.942 [2024-04-25 20:00:42.703692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.942 [2024-04-25 20:00:42.703715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xce4b30 00:05:44.942 [2024-04-25 20:00:42.703728] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.942 [2024-04-25 20:00:42.705088] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.942 [2024-04-25 20:00:42.705117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.942 Passthru0 00:05:44.942 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.942 20:00:42 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.942 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.942 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.942 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.942 20:00:42 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.942 { 00:05:44.942 "name": "Malloc2", 00:05:44.942 "aliases": [ 00:05:44.942 "225a4d02-6236-4f33-814f-e7aa92727d22" 00:05:44.942 ], 00:05:44.942 "product_name": "Malloc disk", 00:05:44.942 "block_size": 512, 00:05:44.942 "num_blocks": 16384, 00:05:44.942 "uuid": "225a4d02-6236-4f33-814f-e7aa92727d22", 00:05:44.942 "assigned_rate_limits": { 00:05:44.942 "rw_ios_per_sec": 0, 00:05:44.942 "rw_mbytes_per_sec": 0, 00:05:44.942 "r_mbytes_per_sec": 0, 00:05:44.942 "w_mbytes_per_sec": 0 00:05:44.942 }, 00:05:44.942 "claimed": true, 00:05:44.942 "claim_type": "exclusive_write", 00:05:44.942 "zoned": false, 00:05:44.942 "supported_io_types": { 00:05:44.942 "read": true, 00:05:44.942 "write": true, 00:05:44.942 "unmap": true, 00:05:44.942 "write_zeroes": true, 00:05:44.942 "flush": true, 00:05:44.942 "reset": true, 00:05:44.942 "compare": false, 00:05:44.942 "compare_and_write": false, 00:05:44.942 "abort": true, 00:05:44.942 "nvme_admin": false, 00:05:44.942 "nvme_io": false 00:05:44.942 }, 00:05:44.942 "memory_domains": [ 00:05:44.942 { 00:05:44.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.942 "dma_device_type": 2 00:05:44.942 } 00:05:44.942 ], 00:05:44.942 "driver_specific": {} 00:05:44.942 }, 00:05:44.942 { 00:05:44.942 "name": "Passthru0", 00:05:44.942 "aliases": [ 00:05:44.942 "ece39749-ee24-5210-9176-9967104623ee" 00:05:44.942 ], 00:05:44.942 "product_name": "passthru", 00:05:44.942 "block_size": 512, 00:05:44.942 "num_blocks": 16384, 00:05:44.942 "uuid": "ece39749-ee24-5210-9176-9967104623ee", 00:05:44.942 "assigned_rate_limits": { 00:05:44.942 "rw_ios_per_sec": 0, 00:05:44.942 "rw_mbytes_per_sec": 0, 00:05:44.942 "r_mbytes_per_sec": 0, 00:05:44.942 "w_mbytes_per_sec": 0 00:05:44.942 }, 00:05:44.942 "claimed": false, 00:05:44.942 "zoned": false, 00:05:44.942 "supported_io_types": { 00:05:44.942 "read": true, 00:05:44.942 "write": true, 00:05:44.942 "unmap": true, 00:05:44.942 "write_zeroes": true, 00:05:44.942 "flush": true, 00:05:44.942 "reset": true, 00:05:44.942 "compare": false, 00:05:44.942 "compare_and_write": false, 00:05:44.942 "abort": true, 00:05:44.942 "nvme_admin": false, 00:05:44.942 "nvme_io": false 00:05:44.942 }, 00:05:44.942 "memory_domains": [ 00:05:44.942 { 00:05:44.942 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.942 "dma_device_type": 2 00:05:44.942 } 00:05:44.942 ], 00:05:44.942 "driver_specific": { 00:05:44.942 "passthru": { 00:05:44.942 "name": "Passthru0", 00:05:44.942 "base_bdev_name": "Malloc2" 00:05:44.942 } 00:05:44.942 } 00:05:44.942 } 00:05:44.942 ]' 00:05:44.942 20:00:42 -- rpc/rpc.sh@21 -- # jq length 00:05:44.942 20:00:42 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.942 20:00:42 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.942 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.942 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.942 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.942 20:00:42 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:44.942 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.942 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.942 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.943 20:00:42 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.943 20:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.943 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.943 20:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.943 20:00:42 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.943 20:00:42 -- rpc/rpc.sh@26 -- # jq length 00:05:44.943 20:00:42 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.943 00:05:44.943 real 0m0.271s 00:05:44.943 user 0m0.172s 00:05:44.943 sys 0m0.051s 00:05:44.943 20:00:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.943 20:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:44.943 ************************************ 00:05:44.943 END TEST rpc_daemon_integrity 00:05:44.943 ************************************ 00:05:45.202 20:00:42 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:45.202 20:00:42 -- rpc/rpc.sh@84 -- # killprocess 2042856 00:05:45.202 20:00:42 -- common/autotest_common.sh@926 -- # '[' -z 2042856 ']' 00:05:45.202 20:00:42 -- common/autotest_common.sh@930 -- # kill -0 2042856 00:05:45.202 20:00:42 -- common/autotest_common.sh@931 -- # uname 00:05:45.202 20:00:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.202 20:00:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2042856 00:05:45.202 20:00:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.202 20:00:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.202 20:00:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2042856' 00:05:45.202 killing process with pid 2042856 00:05:45.202 20:00:42 -- common/autotest_common.sh@945 -- # kill 2042856 00:05:45.202 20:00:42 -- common/autotest_common.sh@950 -- # wait 2042856 00:05:45.769 00:05:45.769 real 0m2.823s 00:05:45.769 user 0m3.440s 00:05:45.769 sys 0m0.902s 00:05:45.769 20:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.769 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.769 ************************************ 00:05:45.769 END TEST rpc 00:05:45.769 ************************************ 00:05:45.769 20:00:43 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.769 20:00:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.769 20:00:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.769 20:00:43 -- common/autotest_common.sh@10 -- # set +x 
00:05:45.769 ************************************ 00:05:45.769 START TEST rpc_client 00:05:45.769 ************************************ 00:05:45.769 20:00:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.769 * Looking for test storage... 00:05:45.769 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client 00:05:45.769 20:00:43 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:45.769 OK 00:05:45.769 20:00:43 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.769 00:05:45.769 real 0m0.120s 00:05:45.769 user 0m0.052s 00:05:45.769 sys 0m0.078s 00:05:45.769 20:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.769 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:45.769 ************************************ 00:05:45.769 END TEST rpc_client 00:05:45.769 ************************************ 00:05:46.028 20:00:43 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.029 20:00:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.029 20:00:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.029 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.029 ************************************ 00:05:46.029 START TEST json_config 00:05:46.029 ************************************ 00:05:46.029 20:00:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh 00:05:46.029 20:00:43 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.029 20:00:43 -- nvmf/common.sh@7 -- # uname -s 00:05:46.029 20:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.029 20:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.029 20:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.029 20:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.029 20:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.029 20:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.029 20:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.029 20:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.029 20:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.029 20:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.029 20:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e 00:05:46.029 20:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e 00:05:46.029 20:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.029 20:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.029 20:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.029 20:00:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:05:46.029 20:00:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.029 20:00:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.029 20:00:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.029 20:00:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- paths/export.sh@5 -- # export PATH 00:05:46.029 20:00:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- nvmf/common.sh@46 -- # : 0 00:05:46.029 20:00:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:46.029 20:00:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:46.029 20:00:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:46.029 20:00:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.029 20:00:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.029 20:00:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:46.029 20:00:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:46.029 20:00:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:46.029 20:00:43 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:46.029 20:00:43 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:46.029 20:00:43 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:46.029 20:00:43 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:46.029 20:00:43 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:46.029 WARNING: No tests are enabled so not running JSON configuration tests 00:05:46.029 20:00:43 -- json_config/json_config.sh@27 -- # exit 0 00:05:46.029 00:05:46.029 real 0m0.100s 00:05:46.029 user 0m0.048s 00:05:46.029 sys 0m0.053s 00:05:46.029 20:00:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.029 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.029 ************************************ 00:05:46.029 END TEST json_config 00:05:46.029 ************************************ 00:05:46.029 20:00:43 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.029 20:00:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.029 20:00:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.029 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.029 ************************************ 00:05:46.029 START TEST json_config_extra_key 00:05:46.029 ************************************ 00:05:46.029 20:00:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.029 20:00:43 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.029 20:00:43 -- nvmf/common.sh@7 -- # uname -s 00:05:46.029 20:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.029 20:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.029 20:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.029 20:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.029 20:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.029 20:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.029 20:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.029 20:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.029 20:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.029 20:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.029 20:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e 00:05:46.029 20:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e 00:05:46.029 20:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.029 20:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.029 20:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.029 20:00:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:05:46.029 20:00:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.029 20:00:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.029 20:00:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.029 20:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- paths/export.sh@5 -- # export PATH 00:05:46.029 20:00:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.029 20:00:43 -- nvmf/common.sh@46 -- # : 0 00:05:46.029 20:00:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:46.029 20:00:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:46.029 20:00:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:46.029 20:00:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.029 20:00:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.029 20:00:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:46.029 20:00:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:46.029 20:00:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:46.290 INFO: launching applications... 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2043485 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:46.290 Waiting for target to run... 
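The extra-key test that follows starts spdk_tgt from a JSON config and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the same binary, flags, and config path as this run; the polling loop below uses spdk_get_version as a readiness probe and is illustrative, not the autotest helper itself:

#!/usr/bin/env bash
# Sketch: start spdk_tgt from a JSON config and wait for its RPC socket (assumed paths).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
SOCK=/var/tmp/spdk_tgt.sock

"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
    --json "$SPDK/test/json_config/extra_key.json" &
tgt_pid=$!

# Poll the socket until the target responds (up to ~30 s), mirroring waitforlisten.
for _ in $(seq 1 60); do
    if "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version >/dev/null 2>&1; then
        echo "target $tgt_pid is up"
        break
    fi
    sleep 0.5
done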
00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2043485 /var/tmp/spdk_tgt.sock 00:05:46.290 20:00:43 -- common/autotest_common.sh@819 -- # '[' -z 2043485 ']' 00:05:46.290 20:00:43 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.290 20:00:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.290 20:00:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.290 20:00:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.290 20:00:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.290 20:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:46.290 [2024-04-25 20:00:44.025669] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:46.290 [2024-04-25 20:00:44.025748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043485 ] 00:05:46.290 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.568 [2024-04-25 20:00:44.391926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.568 [2024-04-25 20:00:44.477872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.568 [2024-04-25 20:00:44.478011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.841 [2024-04-25 20:00:44.546701] 'OCF_Core' volume operations registered 00:05:46.841 [2024-04-25 20:00:44.549738] 'OCF_Cache' volume operations registered 00:05:46.841 [2024-04-25 20:00:44.552644] 'OCF Composite' volume operations registered 00:05:46.841 [2024-04-25 20:00:44.555649] 'SPDK_block_device' volume operations registered 00:05:47.100 20:00:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.100 20:00:44 -- common/autotest_common.sh@852 -- # return 0 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:47.100 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:47.100 INFO: shutting down applications... 
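The shutdown traced below sends SIGINT to the target and then polls it with kill -0 in half-second steps until the process exits, rather than killing it outright. A minimal sketch of that wait loop; the 30-iteration bound matches the trace, while the argument handling and error message are illustrative:

#!/usr/bin/env bash
# Sketch: graceful SPDK target shutdown - SIGINT, then poll until the PID is gone.
set -euo pipefail

pid=$1                      # PID of the running spdk_tgt (2043485 in this run)

kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "SPDK target shutdown done"
        exit 0
    fi
    sleep 0.5
done
echo "target $pid did not exit in time" >&2
exit 1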
00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2043485 ]] 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2043485 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2043485 00:05:47.100 20:00:44 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:47.668 20:00:45 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:47.668 20:00:45 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:47.668 20:00:45 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2043485 00:05:47.668 20:00:45 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2043485 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:48.237 SPDK target shutdown done 00:05:48.237 20:00:45 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:48.237 Success 00:05:48.237 00:05:48.237 real 0m2.103s 00:05:48.237 user 0m1.581s 00:05:48.237 sys 0m0.536s 00:05:48.237 20:00:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.237 20:00:45 -- common/autotest_common.sh@10 -- # set +x 00:05:48.237 ************************************ 00:05:48.237 END TEST json_config_extra_key 00:05:48.237 ************************************ 00:05:48.237 20:00:46 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.237 20:00:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:48.237 20:00:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:48.237 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.237 ************************************ 00:05:48.237 START TEST alias_rpc 00:05:48.237 ************************************ 00:05:48.237 20:00:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.237 * Looking for test storage... 
00:05:48.237 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc 00:05:48.237 20:00:46 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.237 20:00:46 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2043833 00:05:48.237 20:00:46 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2043833 00:05:48.237 20:00:46 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.237 20:00:46 -- common/autotest_common.sh@819 -- # '[' -z 2043833 ']' 00:05:48.237 20:00:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.237 20:00:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.237 20:00:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.237 20:00:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.237 20:00:46 -- common/autotest_common.sh@10 -- # set +x 00:05:48.496 [2024-04-25 20:00:46.183939] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:48.496 [2024-04-25 20:00:46.184024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043833 ] 00:05:48.496 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.496 [2024-04-25 20:00:46.291044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.496 [2024-04-25 20:00:46.386004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.496 [2024-04-25 20:00:46.386178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.756 [2024-04-25 20:00:46.586053] 'OCF_Core' volume operations registered 00:05:48.756 [2024-04-25 20:00:46.589558] 'OCF_Cache' volume operations registered 00:05:48.756 [2024-04-25 20:00:46.593506] 'OCF Composite' volume operations registered 00:05:48.756 [2024-04-25 20:00:46.597003] 'SPDK_block_device' volume operations registered 00:05:49.323 20:00:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.323 20:00:46 -- common/autotest_common.sh@852 -- # return 0 00:05:49.323 20:00:46 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:49.323 20:00:47 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2043833 00:05:49.323 20:00:47 -- common/autotest_common.sh@926 -- # '[' -z 2043833 ']' 00:05:49.323 20:00:47 -- common/autotest_common.sh@930 -- # kill -0 2043833 00:05:49.323 20:00:47 -- common/autotest_common.sh@931 -- # uname 00:05:49.323 20:00:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:49.323 20:00:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2043833 00:05:49.582 20:00:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:49.582 20:00:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:49.582 20:00:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2043833' 00:05:49.582 killing process with pid 2043833 00:05:49.582 20:00:47 -- common/autotest_common.sh@945 -- # kill 2043833 00:05:49.582 20:00:47 -- common/autotest_common.sh@950 -- # wait 2043833 00:05:50.149 00:05:50.149 real 0m1.840s 00:05:50.149 user 
0m1.839s 00:05:50.149 sys 0m0.606s 00:05:50.149 20:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.149 20:00:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.149 ************************************ 00:05:50.149 END TEST alias_rpc 00:05:50.149 ************************************ 00:05:50.149 20:00:47 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:50.149 20:00:47 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.149 20:00:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:50.149 20:00:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.149 20:00:47 -- common/autotest_common.sh@10 -- # set +x 00:05:50.149 ************************************ 00:05:50.149 START TEST spdkcli_tcp 00:05:50.149 ************************************ 00:05:50.149 20:00:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:50.149 * Looking for test storage... 00:05:50.149 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli 00:05:50.149 20:00:48 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/common.sh 00:05:50.149 20:00:48 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:50.149 20:00:48 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/clear_config.py 00:05:50.149 20:00:48 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:50.149 20:00:48 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:50.149 20:00:48 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:50.149 20:00:48 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:50.149 20:00:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.149 20:00:48 -- common/autotest_common.sh@10 -- # set +x 00:05:50.149 20:00:48 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2044125 00:05:50.149 20:00:48 -- spdkcli/tcp.sh@27 -- # waitforlisten 2044125 00:05:50.150 20:00:48 -- common/autotest_common.sh@819 -- # '[' -z 2044125 ']' 00:05:50.150 20:00:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.150 20:00:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.150 20:00:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.150 20:00:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.150 20:00:48 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:50.150 20:00:48 -- common/autotest_common.sh@10 -- # set +x 00:05:50.150 [2024-04-25 20:00:48.077101] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:50.150 [2024-04-25 20:00:48.077177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044125 ] 00:05:50.409 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.409 [2024-04-25 20:00:48.183460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.409 [2024-04-25 20:00:48.283879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:50.409 [2024-04-25 20:00:48.284100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.409 [2024-04-25 20:00:48.284106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.667 [2024-04-25 20:00:48.484516] 'OCF_Core' volume operations registered 00:05:50.667 [2024-04-25 20:00:48.488006] 'OCF_Cache' volume operations registered 00:05:50.667 [2024-04-25 20:00:48.491951] 'OCF Composite' volume operations registered 00:05:50.667 [2024-04-25 20:00:48.495432] 'SPDK_block_device' volume operations registered 00:05:51.233 20:00:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.233 20:00:48 -- common/autotest_common.sh@852 -- # return 0 00:05:51.233 20:00:48 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:51.233 20:00:48 -- spdkcli/tcp.sh@31 -- # socat_pid=2044304 00:05:51.233 20:00:48 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:51.504 [ 00:05:51.504 "bdev_malloc_delete", 00:05:51.504 "bdev_malloc_create", 00:05:51.504 "bdev_null_resize", 00:05:51.504 "bdev_null_delete", 00:05:51.504 "bdev_null_create", 00:05:51.504 "bdev_nvme_cuse_unregister", 00:05:51.504 "bdev_nvme_cuse_register", 00:05:51.504 "bdev_opal_new_user", 00:05:51.504 "bdev_opal_set_lock_state", 00:05:51.504 "bdev_opal_delete", 00:05:51.505 "bdev_opal_get_info", 00:05:51.505 "bdev_opal_create", 00:05:51.505 "bdev_nvme_opal_revert", 00:05:51.505 "bdev_nvme_opal_init", 00:05:51.505 "bdev_nvme_send_cmd", 00:05:51.505 "bdev_nvme_get_path_iostat", 00:05:51.505 "bdev_nvme_get_mdns_discovery_info", 00:05:51.505 "bdev_nvme_stop_mdns_discovery", 00:05:51.505 "bdev_nvme_start_mdns_discovery", 00:05:51.505 "bdev_nvme_set_multipath_policy", 00:05:51.505 "bdev_nvme_set_preferred_path", 00:05:51.505 "bdev_nvme_get_io_paths", 00:05:51.505 "bdev_nvme_remove_error_injection", 00:05:51.505 "bdev_nvme_add_error_injection", 00:05:51.505 "bdev_nvme_get_discovery_info", 00:05:51.505 "bdev_nvme_stop_discovery", 00:05:51.505 "bdev_nvme_start_discovery", 00:05:51.505 "bdev_nvme_get_controller_health_info", 00:05:51.505 "bdev_nvme_disable_controller", 00:05:51.505 "bdev_nvme_enable_controller", 00:05:51.505 "bdev_nvme_reset_controller", 00:05:51.505 "bdev_nvme_get_transport_statistics", 00:05:51.505 "bdev_nvme_apply_firmware", 00:05:51.505 "bdev_nvme_detach_controller", 00:05:51.505 "bdev_nvme_get_controllers", 00:05:51.505 "bdev_nvme_attach_controller", 00:05:51.505 "bdev_nvme_set_hotplug", 00:05:51.505 "bdev_nvme_set_options", 00:05:51.505 "bdev_passthru_delete", 00:05:51.505 "bdev_passthru_create", 00:05:51.505 "bdev_lvol_grow_lvstore", 00:05:51.505 "bdev_lvol_get_lvols", 00:05:51.505 "bdev_lvol_get_lvstores", 00:05:51.505 "bdev_lvol_delete", 00:05:51.505 "bdev_lvol_set_read_only", 00:05:51.505 "bdev_lvol_resize", 00:05:51.505 "bdev_lvol_decouple_parent", 00:05:51.505 "bdev_lvol_inflate", 
00:05:51.505 "bdev_lvol_rename", 00:05:51.505 "bdev_lvol_clone_bdev", 00:05:51.505 "bdev_lvol_clone", 00:05:51.505 "bdev_lvol_snapshot", 00:05:51.505 "bdev_lvol_create", 00:05:51.505 "bdev_lvol_delete_lvstore", 00:05:51.505 "bdev_lvol_rename_lvstore", 00:05:51.505 "bdev_lvol_create_lvstore", 00:05:51.505 "bdev_raid_set_options", 00:05:51.505 "bdev_raid_remove_base_bdev", 00:05:51.505 "bdev_raid_add_base_bdev", 00:05:51.505 "bdev_raid_delete", 00:05:51.505 "bdev_raid_create", 00:05:51.505 "bdev_raid_get_bdevs", 00:05:51.505 "bdev_error_inject_error", 00:05:51.505 "bdev_error_delete", 00:05:51.505 "bdev_error_create", 00:05:51.505 "bdev_split_delete", 00:05:51.505 "bdev_split_create", 00:05:51.505 "bdev_delay_delete", 00:05:51.505 "bdev_delay_create", 00:05:51.505 "bdev_delay_update_latency", 00:05:51.505 "bdev_zone_block_delete", 00:05:51.505 "bdev_zone_block_create", 00:05:51.505 "blobfs_create", 00:05:51.505 "blobfs_detect", 00:05:51.505 "blobfs_set_cache_size", 00:05:51.505 "bdev_ocf_flush_status", 00:05:51.505 "bdev_ocf_flush_start", 00:05:51.505 "bdev_ocf_set_seqcutoff", 00:05:51.505 "bdev_ocf_set_cache_mode", 00:05:51.505 "bdev_ocf_get_bdevs", 00:05:51.505 "bdev_ocf_reset_stats", 00:05:51.505 "bdev_ocf_get_stats", 00:05:51.505 "bdev_ocf_delete", 00:05:51.505 "bdev_ocf_create", 00:05:51.505 "bdev_aio_delete", 00:05:51.505 "bdev_aio_rescan", 00:05:51.505 "bdev_aio_create", 00:05:51.505 "bdev_ftl_set_property", 00:05:51.505 "bdev_ftl_get_properties", 00:05:51.505 "bdev_ftl_get_stats", 00:05:51.505 "bdev_ftl_unmap", 00:05:51.505 "bdev_ftl_unload", 00:05:51.505 "bdev_ftl_delete", 00:05:51.505 "bdev_ftl_load", 00:05:51.505 "bdev_ftl_create", 00:05:51.505 "bdev_virtio_attach_controller", 00:05:51.505 "bdev_virtio_scsi_get_devices", 00:05:51.505 "bdev_virtio_detach_controller", 00:05:51.505 "bdev_virtio_blk_set_hotplug", 00:05:51.505 "bdev_iscsi_delete", 00:05:51.505 "bdev_iscsi_create", 00:05:51.505 "bdev_iscsi_set_options", 00:05:51.505 "accel_error_inject_error", 00:05:51.505 "ioat_scan_accel_module", 00:05:51.505 "dsa_scan_accel_module", 00:05:51.505 "iaa_scan_accel_module", 00:05:51.505 "iscsi_set_options", 00:05:51.505 "iscsi_get_auth_groups", 00:05:51.505 "iscsi_auth_group_remove_secret", 00:05:51.505 "iscsi_auth_group_add_secret", 00:05:51.505 "iscsi_delete_auth_group", 00:05:51.505 "iscsi_create_auth_group", 00:05:51.505 "iscsi_set_discovery_auth", 00:05:51.505 "iscsi_get_options", 00:05:51.505 "iscsi_target_node_request_logout", 00:05:51.505 "iscsi_target_node_set_redirect", 00:05:51.505 "iscsi_target_node_set_auth", 00:05:51.505 "iscsi_target_node_add_lun", 00:05:51.505 "iscsi_get_connections", 00:05:51.505 "iscsi_portal_group_set_auth", 00:05:51.505 "iscsi_start_portal_group", 00:05:51.505 "iscsi_delete_portal_group", 00:05:51.505 "iscsi_create_portal_group", 00:05:51.505 "iscsi_get_portal_groups", 00:05:51.505 "iscsi_delete_target_node", 00:05:51.505 "iscsi_target_node_remove_pg_ig_maps", 00:05:51.505 "iscsi_target_node_add_pg_ig_maps", 00:05:51.505 "iscsi_create_target_node", 00:05:51.505 "iscsi_get_target_nodes", 00:05:51.505 "iscsi_delete_initiator_group", 00:05:51.505 "iscsi_initiator_group_remove_initiators", 00:05:51.505 "iscsi_initiator_group_add_initiators", 00:05:51.505 "iscsi_create_initiator_group", 00:05:51.505 "iscsi_get_initiator_groups", 00:05:51.505 "nvmf_set_crdt", 00:05:51.505 "nvmf_set_config", 00:05:51.505 "nvmf_set_max_subsystems", 00:05:51.505 "nvmf_subsystem_get_listeners", 00:05:51.505 "nvmf_subsystem_get_qpairs", 00:05:51.505 
"nvmf_subsystem_get_controllers", 00:05:51.505 "nvmf_get_stats", 00:05:51.505 "nvmf_get_transports", 00:05:51.505 "nvmf_create_transport", 00:05:51.505 "nvmf_get_targets", 00:05:51.505 "nvmf_delete_target", 00:05:51.505 "nvmf_create_target", 00:05:51.505 "nvmf_subsystem_allow_any_host", 00:05:51.505 "nvmf_subsystem_remove_host", 00:05:51.505 "nvmf_subsystem_add_host", 00:05:51.505 "nvmf_subsystem_remove_ns", 00:05:51.505 "nvmf_subsystem_add_ns", 00:05:51.505 "nvmf_subsystem_listener_set_ana_state", 00:05:51.505 "nvmf_discovery_get_referrals", 00:05:51.505 "nvmf_discovery_remove_referral", 00:05:51.505 "nvmf_discovery_add_referral", 00:05:51.505 "nvmf_subsystem_remove_listener", 00:05:51.505 "nvmf_subsystem_add_listener", 00:05:51.505 "nvmf_delete_subsystem", 00:05:51.505 "nvmf_create_subsystem", 00:05:51.505 "nvmf_get_subsystems", 00:05:51.505 "env_dpdk_get_mem_stats", 00:05:51.505 "nbd_get_disks", 00:05:51.505 "nbd_stop_disk", 00:05:51.505 "nbd_start_disk", 00:05:51.505 "ublk_recover_disk", 00:05:51.505 "ublk_get_disks", 00:05:51.505 "ublk_stop_disk", 00:05:51.505 "ublk_start_disk", 00:05:51.505 "ublk_destroy_target", 00:05:51.505 "ublk_create_target", 00:05:51.505 "virtio_blk_create_transport", 00:05:51.505 "virtio_blk_get_transports", 00:05:51.505 "vhost_controller_set_coalescing", 00:05:51.505 "vhost_get_controllers", 00:05:51.505 "vhost_delete_controller", 00:05:51.505 "vhost_create_blk_controller", 00:05:51.505 "vhost_scsi_controller_remove_target", 00:05:51.505 "vhost_scsi_controller_add_target", 00:05:51.505 "vhost_start_scsi_controller", 00:05:51.505 "vhost_create_scsi_controller", 00:05:51.505 "thread_set_cpumask", 00:05:51.505 "framework_get_scheduler", 00:05:51.505 "framework_set_scheduler", 00:05:51.505 "framework_get_reactors", 00:05:51.505 "thread_get_io_channels", 00:05:51.505 "thread_get_pollers", 00:05:51.505 "thread_get_stats", 00:05:51.505 "framework_monitor_context_switch", 00:05:51.505 "spdk_kill_instance", 00:05:51.505 "log_enable_timestamps", 00:05:51.505 "log_get_flags", 00:05:51.505 "log_clear_flag", 00:05:51.505 "log_set_flag", 00:05:51.505 "log_get_level", 00:05:51.505 "log_set_level", 00:05:51.505 "log_get_print_level", 00:05:51.505 "log_set_print_level", 00:05:51.505 "framework_enable_cpumask_locks", 00:05:51.505 "framework_disable_cpumask_locks", 00:05:51.505 "framework_wait_init", 00:05:51.505 "framework_start_init", 00:05:51.505 "scsi_get_devices", 00:05:51.505 "bdev_get_histogram", 00:05:51.505 "bdev_enable_histogram", 00:05:51.505 "bdev_set_qos_limit", 00:05:51.505 "bdev_set_qd_sampling_period", 00:05:51.505 "bdev_get_bdevs", 00:05:51.505 "bdev_reset_iostat", 00:05:51.505 "bdev_get_iostat", 00:05:51.505 "bdev_examine", 00:05:51.505 "bdev_wait_for_examine", 00:05:51.505 "bdev_set_options", 00:05:51.505 "notify_get_notifications", 00:05:51.505 "notify_get_types", 00:05:51.505 "accel_get_stats", 00:05:51.505 "accel_set_options", 00:05:51.505 "accel_set_driver", 00:05:51.505 "accel_crypto_key_destroy", 00:05:51.505 "accel_crypto_keys_get", 00:05:51.505 "accel_crypto_key_create", 00:05:51.505 "accel_assign_opc", 00:05:51.505 "accel_get_module_info", 00:05:51.505 "accel_get_opc_assignments", 00:05:51.505 "vmd_rescan", 00:05:51.505 "vmd_remove_device", 00:05:51.505 "vmd_enable", 00:05:51.505 "sock_set_default_impl", 00:05:51.505 "sock_impl_set_options", 00:05:51.505 "sock_impl_get_options", 00:05:51.505 "iobuf_get_stats", 00:05:51.505 "iobuf_set_options", 00:05:51.505 "framework_get_pci_devices", 00:05:51.505 "framework_get_config", 00:05:51.505 
"framework_get_subsystems", 00:05:51.505 "trace_get_info", 00:05:51.505 "trace_get_tpoint_group_mask", 00:05:51.505 "trace_disable_tpoint_group", 00:05:51.505 "trace_enable_tpoint_group", 00:05:51.505 "trace_clear_tpoint_mask", 00:05:51.505 "trace_set_tpoint_mask", 00:05:51.505 "spdk_get_version", 00:05:51.505 "rpc_get_methods" 00:05:51.505 ] 00:05:51.505 20:00:49 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:51.505 20:00:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.505 20:00:49 -- common/autotest_common.sh@10 -- # set +x 00:05:51.505 20:00:49 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:51.505 20:00:49 -- spdkcli/tcp.sh@38 -- # killprocess 2044125 00:05:51.505 20:00:49 -- common/autotest_common.sh@926 -- # '[' -z 2044125 ']' 00:05:51.506 20:00:49 -- common/autotest_common.sh@930 -- # kill -0 2044125 00:05:51.506 20:00:49 -- common/autotest_common.sh@931 -- # uname 00:05:51.506 20:00:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.506 20:00:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2044125 00:05:51.506 20:00:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.506 20:00:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.506 20:00:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2044125' 00:05:51.506 killing process with pid 2044125 00:05:51.506 20:00:49 -- common/autotest_common.sh@945 -- # kill 2044125 00:05:51.506 20:00:49 -- common/autotest_common.sh@950 -- # wait 2044125 00:05:52.074 00:05:52.074 real 0m1.985s 00:05:52.074 user 0m3.585s 00:05:52.074 sys 0m0.652s 00:05:52.074 20:00:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.074 20:00:49 -- common/autotest_common.sh@10 -- # set +x 00:05:52.074 ************************************ 00:05:52.074 END TEST spdkcli_tcp 00:05:52.074 ************************************ 00:05:52.074 20:00:49 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:52.074 20:00:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.074 20:00:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.074 20:00:49 -- common/autotest_common.sh@10 -- # set +x 00:05:52.074 ************************************ 00:05:52.074 START TEST dpdk_mem_utility 00:05:52.074 ************************************ 00:05:52.074 20:00:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:52.332 * Looking for test storage... 
00:05:52.332 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility 00:05:52.332 20:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:52.332 20:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2044417 00:05:52.332 20:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2044417 00:05:52.332 20:00:50 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.332 20:00:50 -- common/autotest_common.sh@819 -- # '[' -z 2044417 ']' 00:05:52.332 20:00:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.332 20:00:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.332 20:00:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.332 20:00:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.332 20:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:52.332 [2024-04-25 20:00:50.109997] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:52.332 [2024-04-25 20:00:50.110082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044417 ] 00:05:52.332 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.332 [2024-04-25 20:00:50.219249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.591 [2024-04-25 20:00:50.319231] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.591 [2024-04-25 20:00:50.319394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.591 [2024-04-25 20:00:50.503845] 'OCF_Core' volume operations registered 00:05:52.591 [2024-04-25 20:00:50.507060] 'OCF_Cache' volume operations registered 00:05:52.591 [2024-04-25 20:00:50.510669] 'OCF Composite' volume operations registered 00:05:52.591 [2024-04-25 20:00:50.513881] 'SPDK_block_device' volume operations registered 00:05:53.158 20:00:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.158 20:00:51 -- common/autotest_common.sh@852 -- # return 0 00:05:53.158 20:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:53.158 20:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:53.158 20:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:53.158 20:00:51 -- common/autotest_common.sh@10 -- # set +x 00:05:53.158 { 00:05:53.158 "filename": "/tmp/spdk_mem_dump.txt" 00:05:53.158 } 00:05:53.158 20:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:53.158 20:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.418 DPDK memory size 1198.000000 MiB in 1 heap(s) 00:05:53.418 1 heaps totaling size 1198.000000 MiB 00:05:53.418 size: 1198.000000 MiB heap id: 0 00:05:53.418 end heaps---------- 00:05:53.418 26 mempools totaling size 954.459290 MiB 00:05:53.418 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:53.418 size: 158.602051 MiB name: 
PDU_data_out_Pool 00:05:53.418 size: 84.521057 MiB name: bdev_io_2044417 00:05:53.418 size: 76.286926 MiB name: ocf_env_12:ocf_mio_8 00:05:53.418 size: 60.174072 MiB name: ocf_env_8:ocf_req_128 00:05:53.418 size: 51.011292 MiB name: evtpool_2044417 00:05:53.418 size: 50.003479 MiB name: msgpool_2044417 00:05:53.418 size: 40.142639 MiB name: ocf_env_11:ocf_mio_4 00:05:53.418 size: 34.164612 MiB name: ocf_env_7:ocf_req_64 00:05:53.418 size: 22.138245 MiB name: ocf_env_6:ocf_req_32 00:05:53.418 size: 22.138245 MiB name: ocf_env_10:ocf_mio_2 00:05:53.418 size: 21.763794 MiB name: PDU_Pool 00:05:53.418 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:53.418 size: 16.136780 MiB name: ocf_env_5:ocf_req_16 00:05:53.418 size: 14.136292 MiB name: ocf_env_4:ocf_req_8 00:05:53.418 size: 14.136292 MiB name: ocf_env_9:ocf_mio_1 00:05:53.418 size: 12.136414 MiB name: ocf_env_3:ocf_req_4 00:05:53.418 size: 10.135315 MiB name: ocf_env_1:ocf_req_1 00:05:53.418 size: 10.135315 MiB name: ocf_env_2:ocf_req_2 00:05:53.418 size: 8.133545 MiB name: ocf_env_17:OCF Composit 00:05:53.418 size: 6.133728 MiB name: ocf_env_16:OCF_Cache 00:05:53.418 size: 6.133728 MiB name: ocf_env_18:SPDK_block_d 00:05:53.418 size: 1.609375 MiB name: ocf_env_15:ocf_mio_64 00:05:53.418 size: 1.310547 MiB name: ocf_env_14:ocf_mio_32 00:05:53.418 size: 1.161133 MiB name: ocf_env_13:ocf_mio_16 00:05:53.418 size: 0.026123 MiB name: Session_Pool 00:05:53.418 end mempools------- 00:05:53.418 6 memzones totaling size 4.142822 MiB 00:05:53.418 size: 1.000366 MiB name: RG_ring_0_2044417 00:05:53.418 size: 1.000366 MiB name: RG_ring_1_2044417 00:05:53.418 size: 1.000366 MiB name: RG_ring_4_2044417 00:05:53.418 size: 1.000366 MiB name: RG_ring_5_2044417 00:05:53.418 size: 0.125366 MiB name: RG_ring_2_2044417 00:05:53.418 size: 0.015991 MiB name: RG_ring_3_2044417 00:05:53.418 end memzones------- 00:05:53.418 20:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:53.418 heap id: 0 total size: 1198.000000 MiB number of busy elements: 120 number of free elements: 47 00:05:53.418 list of free elements. 
size: 40.154602 MiB 00:05:53.418 element at address: 0x200030800000 with size: 0.999878 MiB 00:05:53.418 element at address: 0x200030200000 with size: 0.999329 MiB 00:05:53.418 element at address: 0x200030c00000 with size: 0.999329 MiB 00:05:53.418 element at address: 0x20002f800000 with size: 0.998962 MiB 00:05:53.418 element at address: 0x20002f000000 with size: 0.998779 MiB 00:05:53.418 element at address: 0x200018e00000 with size: 0.998718 MiB 00:05:53.418 element at address: 0x200019000000 with size: 0.997375 MiB 00:05:53.418 element at address: 0x200019a00000 with size: 0.997375 MiB 00:05:53.418 element at address: 0x20001b000000 with size: 0.996399 MiB 00:05:53.418 element at address: 0x200024a00000 with size: 0.996399 MiB 00:05:53.418 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:53.418 element at address: 0x20001a400000 with size: 0.996277 MiB 00:05:53.418 element at address: 0x20001be00000 with size: 0.995911 MiB 00:05:53.418 element at address: 0x20001d000000 with size: 0.994446 MiB 00:05:53.418 element at address: 0x200025a00000 with size: 0.994446 MiB 00:05:53.418 element at address: 0x200049c00000 with size: 0.994446 MiB 00:05:53.418 element at address: 0x200027200000 with size: 0.990051 MiB 00:05:53.418 element at address: 0x20001e800000 with size: 0.968079 MiB 00:05:53.418 element at address: 0x20003fa00000 with size: 0.959961 MiB 00:05:53.418 element at address: 0x200020c00000 with size: 0.958374 MiB 00:05:53.418 element at address: 0x200030a00000 with size: 0.936584 MiB 00:05:53.418 element at address: 0x20001ce00000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x20001e600000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x200020a00000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x200024800000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x200025800000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x200027000000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x200029a00000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x20002ee00000 with size: 0.866211 MiB 00:05:53.418 element at address: 0x20002f600000 with size: 0.866211 MiB 00:05:53.419 element at address: 0x200030000000 with size: 0.866211 MiB 00:05:53.419 element at address: 0x200007000000 with size: 0.866089 MiB 00:05:53.419 element at address: 0x20000b200000 with size: 0.866089 MiB 00:05:53.419 element at address: 0x200000400000 with size: 0.865723 MiB 00:05:53.419 element at address: 0x200000800000 with size: 0.863159 MiB 00:05:53.419 element at address: 0x200029c00000 with size: 0.845764 MiB 00:05:53.419 element at address: 0x200013800000 with size: 0.845581 MiB 00:05:53.419 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:53.419 element at address: 0x20002e800000 with size: 0.837769 MiB 00:05:53.419 element at address: 0x20002ea00000 with size: 0.688354 MiB 00:05:53.419 element at address: 0x200032600000 with size: 0.582886 MiB 00:05:53.419 element at address: 0x200030e00000 with size: 0.490845 MiB 00:05:53.419 element at address: 0x200049a00000 with size: 0.490845 MiB 00:05:53.419 element at address: 0x200031000000 with size: 0.485657 MiB 00:05:53.419 element at address: 0x20003fc00000 with size: 0.410034 MiB 00:05:53.419 element at address: 0x20002ec00000 with size: 0.389160 MiB 00:05:53.419 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:53.419 list of standard malloc elements. 
size: 199.233032 MiB 00:05:53.419 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:53.419 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:53.419 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:53.419 element at address: 0x2000308fff80 with size: 1.000122 MiB 00:05:53.419 element at address: 0x200030afff80 with size: 1.000122 MiB 00:05:53.419 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:53.419 element at address: 0x200030aeff00 with size: 0.062622 MiB 00:05:53.419 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:53.419 element at address: 0x200018effd40 with size: 0.000549 MiB 00:05:53.419 element at address: 0x200030aefdc0 with size: 0.000305 MiB 00:05:53.419 element at address: 0x200018effc40 with size: 0.000244 MiB 00:05:53.419 element at address: 0x200020cf5700 with size: 0.000244 MiB 00:05:53.419 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000004fdc00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000138f8980 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200018effac0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200018effb80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000190ff540 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000190ff600 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000190ff6c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200019aff540 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200019aff600 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200019aff6c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001a4ff0c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001a4ff180 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001a4ff240 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001b0ff140 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001b0ff200 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001b0ff2c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001befef40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001beff000 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001beff0c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001cefde00 with size: 0.000183 MiB 
00:05:53.419 element at address: 0x20001d0fe940 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001d0fea00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001d0feac0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001e6fde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001e8f7d40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001e8f7e00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20001e8f7ec0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200020afde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200020cf5580 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200020cf5640 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200020cf5800 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000248fde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200024aff140 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200024aff200 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200024aff2c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000258fde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200025afe940 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200025afea00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200025afeac0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000270fde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000272fd740 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000272fd800 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000272fd8c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200029afde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200029cd8840 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200029cd8900 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200029cd89c0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002e8d6780 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002e8d6840 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002e8d6900 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002e8fde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002eab0380 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002eab0440 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002eab0500 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002eafde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002ec63a00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002ec63ac0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002ec63b80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002ec63c40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002ec63d00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002ecfde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002eefde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f0ffb00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f0ffbc0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f0ffc80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f0ffd40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f6fde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f8ffbc0 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f8ffc80 with size: 0.000183 MiB 00:05:53.419 element at 
address: 0x20002f8ffd40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20002f8ffe00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000300fde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000302ffd40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200030aefc40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200030aefd00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200030cffd40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200030e7da80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200030e7db40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200030efde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x2000310bc740 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200032695380 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200032695440 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20003fafde00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20003fc68f80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20003fc69040 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20003fc6fc40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20003fc6fe40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x20003fc6ff00 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200049a7da80 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200049a7db40 with size: 0.000183 MiB 00:05:53.419 element at address: 0x200049afde00 with size: 0.000183 MiB 00:05:53.419 list of memzone associated elements. size: 958.612366 MiB 00:05:53.419 element at address: 0x200032695500 with size: 211.416748 MiB 00:05:53.419 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:53.419 element at address: 0x20003fc6ffc0 with size: 157.562561 MiB 00:05:53.419 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:53.419 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:53.419 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2044417_0 00:05:53.419 element at address: 0x200029cd8a80 with size: 75.153687 MiB 00:05:53.419 associated memzone info: size: 75.153564 MiB name: MP_ocf_env_12:ocf_mio_8_0 00:05:53.419 element at address: 0x200020cf58c0 with size: 59.040833 MiB 00:05:53.419 associated memzone info: size: 59.040710 MiB name: MP_ocf_env_8:ocf_req_128_0 00:05:53.419 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:53.420 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2044417_0 00:05:53.420 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:53.420 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2044417_0 00:05:53.420 element at address: 0x2000272fd980 with size: 39.009399 MiB 00:05:53.420 associated memzone info: size: 39.009277 MiB name: MP_ocf_env_11:ocf_mio_4_0 00:05:53.420 element at address: 0x20001e8f7f80 with size: 33.031372 MiB 00:05:53.420 associated memzone info: size: 33.031250 MiB name: MP_ocf_env_7:ocf_req_64_0 00:05:53.420 element at address: 0x20001d0feb80 with size: 21.005005 MiB 00:05:53.420 associated memzone info: size: 21.004883 MiB name: MP_ocf_env_6:ocf_req_32_0 00:05:53.420 element at address: 0x200025afeb80 with size: 21.005005 MiB 00:05:53.420 associated memzone info: size: 21.004883 MiB name: MP_ocf_env_10:ocf_mio_2_0 00:05:53.420 element at address: 0x2000311be940 with size: 20.255554 MiB 00:05:53.420 associated memzone info: size: 20.255432 MiB name: 
MP_PDU_Pool_0 00:05:53.420 element at address: 0x200049dfeb40 with size: 18.005066 MiB 00:05:53.420 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:53.420 element at address: 0x20001beff180 with size: 15.003540 MiB 00:05:53.420 associated memzone info: size: 15.003418 MiB name: MP_ocf_env_5:ocf_req_16_0 00:05:53.420 element at address: 0x20001b0ff380 with size: 13.003052 MiB 00:05:53.420 associated memzone info: size: 13.002930 MiB name: MP_ocf_env_4:ocf_req_8_0 00:05:53.420 element at address: 0x200024aff380 with size: 13.003052 MiB 00:05:53.420 associated memzone info: size: 13.002930 MiB name: MP_ocf_env_9:ocf_mio_1_0 00:05:53.420 element at address: 0x20001a4ff300 with size: 11.003174 MiB 00:05:53.420 associated memzone info: size: 11.003052 MiB name: MP_ocf_env_3:ocf_req_4_0 00:05:53.420 element at address: 0x2000190ff780 with size: 9.002075 MiB 00:05:53.420 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_1:ocf_req_1_0 00:05:53.420 element at address: 0x200019aff780 with size: 9.002075 MiB 00:05:53.420 associated memzone info: size: 9.001953 MiB name: MP_ocf_env_2:ocf_req_2_0 00:05:53.420 element at address: 0x20002f8ffec0 with size: 7.000305 MiB 00:05:53.420 associated memzone info: size: 7.000183 MiB name: MP_ocf_env_17:OCF Composit_0 00:05:53.420 element at address: 0x20002f0ffe00 with size: 5.000488 MiB 00:05:53.420 associated memzone info: size: 5.000366 MiB name: MP_ocf_env_16:OCF_Cache_0 00:05:53.420 element at address: 0x2000302ffe00 with size: 5.000488 MiB 00:05:53.420 associated memzone info: size: 5.000366 MiB name: MP_ocf_env_18:SPDK_block_d_0 00:05:53.420 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:53.420 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2044417 00:05:53.420 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:53.420 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2044417 00:05:53.420 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2044417 00:05:53.420 element at address: 0x2000138f8a40 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_1:ocf_req_1 00:05:53.420 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_2:ocf_req_2 00:05:53.420 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_3:ocf_req_4 00:05:53.420 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_4:ocf_req_8 00:05:53.420 element at address: 0x2000004fdcc0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_5:ocf_req_16 00:05:53.420 element at address: 0x20001cefdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_6:ocf_req_32 00:05:53.420 element at address: 0x20001e6fdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_7:ocf_req_64 00:05:53.420 element at address: 0x200020afdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_8:ocf_req_128 00:05:53.420 element at address: 0x2000248fdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_9:ocf_mio_1 00:05:53.420 element at 
address: 0x2000258fdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_10:ocf_mio_2 00:05:53.420 element at address: 0x2000270fdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_11:ocf_mio_4 00:05:53.420 element at address: 0x200029afdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_12:ocf_mio_8 00:05:53.420 element at address: 0x20002e8fdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_13:ocf_mio_16 00:05:53.420 element at address: 0x20002eafdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_14:ocf_mio_32 00:05:53.420 element at address: 0x20002ecfdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_15:ocf_mio_64 00:05:53.420 element at address: 0x20002eefdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_16:OCF_Cache 00:05:53.420 element at address: 0x20002f6fdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_17:OCF Composit 00:05:53.420 element at address: 0x2000300fdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_ocf_env_18:SPDK_block_d 00:05:53.420 element at address: 0x200030efdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:53.420 element at address: 0x2000310bc800 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:53.420 element at address: 0x20003fafdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:53.420 element at address: 0x200049afdec0 with size: 1.008118 MiB 00:05:53.420 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:53.420 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:53.420 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2044417 00:05:53.420 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:53.420 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2044417 00:05:53.420 element at address: 0x200030cffe00 with size: 1.000488 MiB 00:05:53.420 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2044417 00:05:53.420 element at address: 0x200049cfe940 with size: 1.000488 MiB 00:05:53.420 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2044417 00:05:53.420 element at address: 0x20002ec63dc0 with size: 0.600891 MiB 00:05:53.420 associated memzone info: size: 0.600769 MiB name: MP_ocf_env_15:ocf_mio_64_0 00:05:53.420 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:53.420 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2044417 00:05:53.420 element at address: 0x200030e7dc00 with size: 0.500488 MiB 00:05:53.420 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:53.420 element at address: 0x200049a7dc00 with size: 0.500488 MiB 00:05:53.420 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:53.420 element at address: 0x20002eab05c0 with size: 0.302063 MiB 00:05:53.420 associated memzone info: size: 0.301941 MiB name: MP_ocf_env_14:ocf_mio_32_0 00:05:53.420 element at address: 0x20003107c540 with size: 0.250488 MiB 00:05:53.420 associated memzone info: size: 
0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:53.420 element at address: 0x20002e8d69c0 with size: 0.152649 MiB 00:05:53.420 associated memzone info: size: 0.152527 MiB name: MP_ocf_env_13:ocf_mio_16_0 00:05:53.420 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2044417 00:05:53.420 element at address: 0x2000138d8780 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_1:ocf_req_1 00:05:53.420 element at address: 0x20000b2ddb80 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_2:ocf_req_2 00:05:53.420 element at address: 0x2000070ddb80 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_3:ocf_req_4 00:05:53.420 element at address: 0x2000008dcf80 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_4:ocf_req_8 00:05:53.420 element at address: 0x2000004dda00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_5:ocf_req_16 00:05:53.420 element at address: 0x20001ceddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_6:ocf_req_32 00:05:53.420 element at address: 0x20001e6ddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_7:ocf_req_64 00:05:53.420 element at address: 0x200020addc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_8:ocf_req_128 00:05:53.420 element at address: 0x2000248ddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_9:ocf_mio_1 00:05:53.420 element at address: 0x2000258ddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_10:ocf_mio_2 00:05:53.420 element at address: 0x2000270ddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_11:ocf_mio_4 00:05:53.420 element at address: 0x200029addc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_12:ocf_mio_8 00:05:53.420 element at address: 0x20002eeddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_16:OCF_Cache 00:05:53.420 element at address: 0x20002f6ddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_17:OCF Composit 00:05:53.420 element at address: 0x2000300ddc00 with size: 0.125488 MiB 00:05:53.420 associated memzone info: size: 0.125366 MiB name: RG_MP_ocf_env_18:SPDK_block_d 00:05:53.420 element at address: 0x20003faf5c00 with size: 0.031738 MiB 00:05:53.421 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:53.421 element at address: 0x20003fc69100 with size: 0.023743 MiB 00:05:53.421 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:53.421 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:53.421 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2044417 00:05:53.421 element at address: 0x20003fc6f240 with size: 0.002441 MiB 00:05:53.421 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:53.421 element at address: 0x20002e8fdb00 with size: 0.000732 MiB 00:05:53.421 associated memzone info: size: 0.000610 MiB name: 
RG_MP_ocf_env_13:ocf_mio_16 00:05:53.421 element at address: 0x20002eafdb00 with size: 0.000732 MiB 00:05:53.421 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_14:ocf_mio_32 00:05:53.421 element at address: 0x20002ecfdb00 with size: 0.000732 MiB 00:05:53.421 associated memzone info: size: 0.000610 MiB name: RG_MP_ocf_env_15:ocf_mio_64 00:05:53.421 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:53.421 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2044417 00:05:53.421 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:53.421 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2044417 00:05:53.421 element at address: 0x20003fc6fd00 with size: 0.000305 MiB 00:05:53.421 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:53.421 20:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:53.421 20:00:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2044417 00:05:53.421 20:00:51 -- common/autotest_common.sh@926 -- # '[' -z 2044417 ']' 00:05:53.421 20:00:51 -- common/autotest_common.sh@930 -- # kill -0 2044417 00:05:53.421 20:00:51 -- common/autotest_common.sh@931 -- # uname 00:05:53.421 20:00:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:53.421 20:00:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2044417 00:05:53.421 20:00:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:53.421 20:00:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:53.421 20:00:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2044417' 00:05:53.421 killing process with pid 2044417 00:05:53.421 20:00:51 -- common/autotest_common.sh@945 -- # kill 2044417 00:05:53.421 20:00:51 -- common/autotest_common.sh@950 -- # wait 2044417 00:05:53.990 00:05:53.990 real 0m1.854s 00:05:53.990 user 0m1.885s 00:05:53.990 sys 0m0.603s 00:05:53.990 20:00:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.990 20:00:51 -- common/autotest_common.sh@10 -- # set +x 00:05:53.990 ************************************ 00:05:53.990 END TEST dpdk_mem_utility 00:05:53.990 ************************************ 00:05:53.990 20:00:51 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh 00:05:53.990 20:00:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:53.990 20:00:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.990 20:00:51 -- common/autotest_common.sh@10 -- # set +x 00:05:53.990 ************************************ 00:05:53.990 START TEST event 00:05:53.990 ************************************ 00:05:53.990 20:00:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh 00:05:54.249 * Looking for test storage... 
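The dpdk_mem_utility teardown above uses the suite's killprocess pattern: probe the pid with kill -0, read the process name with ps, send the signal, then wait for the exit. A minimal bash sketch of that pattern (the function and variable names are illustrative, not the suite's actual helper; plain wait only reaps the pid when it is a child of the calling shell, which is the case in these test scripts):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to clean up
    ps --no-headers -o comm= "$pid"               # show which process is being killed
    kill "$pid"                                   # default SIGTERM, as in the trace above
    wait "$pid"                                   # collect the exit status
  }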
00:05:54.249 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event 00:05:54.249 20:00:51 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:54.249 20:00:51 -- bdev/nbd_common.sh@6 -- # set -e 00:05:54.249 20:00:51 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.249 20:00:51 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:54.249 20:00:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.249 20:00:51 -- common/autotest_common.sh@10 -- # set +x 00:05:54.249 ************************************ 00:05:54.249 START TEST event_perf 00:05:54.249 ************************************ 00:05:54.249 20:00:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.249 Running I/O for 1 seconds...[2024-04-25 20:00:51.985036] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:05:54.249 [2024-04-25 20:00:51.985125] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044786 ] 00:05:54.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.249 [2024-04-25 20:00:52.081451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.249 [2024-04-25 20:00:52.183661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.249 [2024-04-25 20:00:52.183738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.249 [2024-04-25 20:00:52.183838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.249 [2024-04-25 20:00:52.183840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.627 Running I/O for 1 seconds... 00:05:55.627 lcore 0: 164369 00:05:55.627 lcore 1: 164367 00:05:55.627 lcore 2: 164369 00:05:55.627 lcore 3: 164370 00:05:55.627 done. 00:05:55.627 00:05:55.627 real 0m1.337s 00:05:55.627 user 0m4.216s 00:05:55.627 sys 0m0.113s 00:05:55.627 20:00:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.627 20:00:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.627 ************************************ 00:05:55.628 END TEST event_perf 00:05:55.628 ************************************ 00:05:55.628 20:00:53 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.628 20:00:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:55.628 20:00:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.628 20:00:53 -- common/autotest_common.sh@10 -- # set +x 00:05:55.628 ************************************ 00:05:55.628 START TEST event_reactor 00:05:55.628 ************************************ 00:05:55.628 20:00:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:55.628 [2024-04-25 20:00:53.372877] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:55.628 [2024-04-25 20:00:53.372981] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044983 ] 00:05:55.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.628 [2024-04-25 20:00:53.479864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.886 [2024-04-25 20:00:53.580803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.823 test_start 00:05:56.823 oneshot 00:05:56.823 tick 100 00:05:56.823 tick 100 00:05:56.823 tick 250 00:05:56.823 tick 100 00:05:56.823 tick 100 00:05:56.823 tick 100 00:05:56.823 tick 250 00:05:56.823 tick 500 00:05:56.823 tick 100 00:05:56.823 tick 100 00:05:56.823 tick 250 00:05:56.823 tick 100 00:05:56.823 tick 100 00:05:56.823 test_end 00:05:56.823 00:05:56.823 real 0m1.340s 00:05:56.823 user 0m1.221s 00:05:56.823 sys 0m0.112s 00:05:56.823 20:00:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.823 20:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.823 ************************************ 00:05:56.823 END TEST event_reactor 00:05:56.823 ************************************ 00:05:56.823 20:00:54 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.823 20:00:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:56.823 20:00:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.823 20:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:56.823 ************************************ 00:05:56.823 START TEST event_reactor_perf 00:05:56.823 ************************************ 00:05:56.823 20:00:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:56.823 [2024-04-25 20:00:54.746934] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:56.823 [2024-04-25 20:00:54.747004] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045182 ] 00:05:57.082 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.082 [2024-04-25 20:00:54.851974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.082 [2024-04-25 20:00:54.948832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.460 test_start 00:05:58.460 test_end 00:05:58.460 Performance: 323721 events per second 00:05:58.460 00:05:58.460 real 0m1.327s 00:05:58.460 user 0m1.214s 00:05:58.460 sys 0m0.106s 00:05:58.460 20:00:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.460 20:00:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.460 ************************************ 00:05:58.460 END TEST event_reactor_perf 00:05:58.460 ************************************ 00:05:58.460 20:00:56 -- event/event.sh@49 -- # uname -s 00:05:58.460 20:00:56 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.460 20:00:56 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.460 20:00:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.460 20:00:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.460 20:00:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.460 ************************************ 00:05:58.460 START TEST event_scheduler 00:05:58.460 ************************************ 00:05:58.460 20:00:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.460 * Looking for test storage... 00:05:58.460 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler 00:05:58.460 20:00:56 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:58.460 20:00:56 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2045407 00:05:58.460 20:00:56 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.460 20:00:56 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:58.460 20:00:56 -- scheduler/scheduler.sh@37 -- # waitforlisten 2045407 00:05:58.460 20:00:56 -- common/autotest_common.sh@819 -- # '[' -z 2045407 ']' 00:05:58.460 20:00:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.460 20:00:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.460 20:00:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.460 20:00:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.460 20:00:56 -- common/autotest_common.sh@10 -- # set +x 00:05:58.460 [2024-04-25 20:00:56.260525] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:05:58.460 [2024-04-25 20:00:56.260608] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045407 ] 00:05:58.460 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.460 [2024-04-25 20:00:56.362790] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.719 [2024-04-25 20:00:56.460829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.719 [2024-04-25 20:00:56.460905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.719 [2024-04-25 20:00:56.461009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.719 [2024-04-25 20:00:56.461008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.287 20:00:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.287 20:00:57 -- common/autotest_common.sh@852 -- # return 0 00:05:59.287 20:00:57 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.287 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.287 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.287 POWER: Env isn't set yet! 00:05:59.287 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:59.287 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:59.287 POWER: Cannot set governor of lcore 0 to userspace 00:05:59.287 POWER: Attempting to initialise PSTAT power management... 00:05:59.287 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:59.287 POWER: Initialized successfully for lcore 0 power management 00:05:59.287 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:59.287 POWER: Initialized successfully for lcore 1 power management 00:05:59.546 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:59.546 POWER: Initialized successfully for lcore 2 power management 00:05:59.546 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:59.546 POWER: Initialized successfully for lcore 3 power management 00:05:59.546 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.546 20:00:57 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.546 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.546 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.546 [2024-04-25 20:00:57.433744] 'OCF_Core' volume operations registered 00:05:59.546 [2024-04-25 20:00:57.437183] 'OCF_Cache' volume operations registered 00:05:59.546 [2024-04-25 20:00:57.441081] 'OCF Composite' volume operations registered 00:05:59.546 [2024-04-25 20:00:57.444525] 'SPDK_block_device' volume operations registered 00:05:59.546 [2024-04-25 20:00:57.445593] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
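The POWER messages above are DPDK's power library probing the ACPI cpufreq driver, falling back to its PSTAT backend, and switching each reactor lcore's scaling governor to 'performance' (the governors are put back to 'powersave' at shutdown, as shown further on in this log). The same sysfs knob can be read or set by hand; a hedged sketch, with cpu0 standing in for whichever lcores the test pins:

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor                             # current governor
  echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor     # what the test app sets
  echo powersave   | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor     # what it restores on exit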
00:05:59.546 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.546 20:00:57 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.546 20:00:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:59.546 20:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.546 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.546 ************************************ 00:05:59.546 START TEST scheduler_create_thread 00:05:59.546 ************************************ 00:05:59.546 20:00:57 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:59.546 20:00:57 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.546 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.546 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.546 2 00:05:59.546 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.546 20:00:57 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.546 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.546 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 3 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 4 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 5 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 6 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 7 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 8 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 9 00:05:59.805 
20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 10 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:59.805 20:00:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:59.805 20:00:57 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:59.805 20:00:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:59.805 20:00:57 -- common/autotest_common.sh@10 -- # set +x 00:06:00.373 20:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:00.373 20:00:58 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:00.373 20:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:00.373 20:00:58 -- common/autotest_common.sh@10 -- # set +x 00:06:01.751 20:00:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.751 20:00:59 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:01.751 20:00:59 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:01.751 20:00:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.751 20:00:59 -- common/autotest_common.sh@10 -- # set +x 00:06:02.687 20:01:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.687 00:06:02.687 real 0m3.103s 00:06:02.687 user 0m0.023s 00:06:02.687 sys 0m0.008s 00:06:02.687 20:01:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.687 20:01:00 -- common/autotest_common.sh@10 -- # set +x 00:06:02.687 ************************************ 00:06:02.687 END TEST scheduler_create_thread 00:06:02.687 ************************************ 00:06:02.687 20:01:00 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:02.687 20:01:00 -- scheduler/scheduler.sh@46 -- # killprocess 2045407 00:06:02.687 20:01:00 -- common/autotest_common.sh@926 -- # '[' -z 2045407 ']' 00:06:02.687 20:01:00 -- common/autotest_common.sh@930 -- # kill -0 2045407 00:06:02.687 20:01:00 -- common/autotest_common.sh@931 -- # uname 00:06:02.687 20:01:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:02.687 20:01:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2045407 00:06:02.945 20:01:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:02.945 20:01:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:02.945 20:01:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2045407' 00:06:02.945 killing process with pid 2045407 00:06:02.945 20:01:00 -- common/autotest_common.sh@945 -- # kill 2045407 00:06:02.945 20:01:00 -- common/autotest_common.sh@950 -- # wait 2045407 00:06:03.203 [2024-04-25 20:01:00.937837] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
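For reference, the scheduler_create_thread sequence traced above is driven entirely through the rpc.py scheduler plugin: pinned active and idle threads are created per core mask, one thread has its load raised with scheduler_thread_set_active, and one is removed with scheduler_thread_delete. A condensed sketch of the same calls (the rpc shorthand is just a convenience; it assumes the scheduler app is already listening on the default RPC socket and that the scheduler_plugin module is on rpc.py's plugin path, and it captures the create call's output as the thread id, as the trace does):

  rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py                        # path as used in this job
  $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # idle thread pinned to core 0
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50                    # raise its load to 50%
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"                           # remove it again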
00:06:03.462 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:03.462 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:03.462 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:03.462 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:03.462 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:03.462 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:03.462 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:03.462 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:03.462 00:06:03.462 real 0m5.244s 00:06:03.462 user 0m10.420s 00:06:03.462 sys 0m0.533s 00:06:03.462 20:01:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.462 20:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.462 ************************************ 00:06:03.462 END TEST event_scheduler 00:06:03.462 ************************************ 00:06:03.721 20:01:01 -- event/event.sh@51 -- # modprobe -n nbd 00:06:03.721 20:01:01 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:03.721 20:01:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:03.721 20:01:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.721 20:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.721 ************************************ 00:06:03.721 START TEST app_repeat 00:06:03.721 ************************************ 00:06:03.721 20:01:01 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:03.721 20:01:01 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.721 20:01:01 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.721 20:01:01 -- event/event.sh@13 -- # local nbd_list 00:06:03.721 20:01:01 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:03.721 20:01:01 -- event/event.sh@14 -- # local bdev_list 00:06:03.721 20:01:01 -- event/event.sh@15 -- # local repeat_times=4 00:06:03.721 20:01:01 -- event/event.sh@17 -- # modprobe nbd 00:06:03.721 20:01:01 -- event/event.sh@19 -- # repeat_pid=2046170 00:06:03.721 20:01:01 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:03.721 20:01:01 -- event/event.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:03.721 20:01:01 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2046170' 00:06:03.721 Process app_repeat pid: 2046170 00:06:03.721 20:01:01 -- event/event.sh@23 -- # for i in {0..2} 00:06:03.722 20:01:01 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:03.722 spdk_app_start Round 0 00:06:03.722 20:01:01 -- event/event.sh@25 -- # waitforlisten 2046170 /var/tmp/spdk-nbd.sock 00:06:03.722 20:01:01 -- common/autotest_common.sh@819 -- # '[' -z 2046170 ']' 00:06:03.722 20:01:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:03.722 20:01:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.722 20:01:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:03.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:03.722 20:01:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.722 20:01:01 -- common/autotest_common.sh@10 -- # set +x 00:06:03.722 [2024-04-25 20:01:01.452585] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:03.722 [2024-04-25 20:01:01.452676] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046170 ] 00:06:03.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.722 [2024-04-25 20:01:01.547707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.722 [2024-04-25 20:01:01.647487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.722 [2024-04-25 20:01:01.647492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.698 20:01:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.698 20:01:02 -- common/autotest_common.sh@852 -- # return 0 00:06:04.698 20:01:02 -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.698 Malloc0 00:06:04.958 20:01:02 -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.958 Malloc1 00:06:05.217 20:01:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@12 -- # local i 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.217 20:01:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.217 /dev/nbd0 00:06:05.217 20:01:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.217 20:01:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.217 20:01:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:05.217 20:01:03 -- common/autotest_common.sh@857 -- # local i 00:06:05.217 20:01:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:05.217 20:01:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:05.217 20:01:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:05.217 20:01:03 -- common/autotest_common.sh@861 -- 
# break 00:06:05.217 20:01:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:05.217 20:01:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:05.217 20:01:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.217 1+0 records in 00:06:05.217 1+0 records out 00:06:05.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257099 s, 15.9 MB/s 00:06:05.217 20:01:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:05.217 20:01:03 -- common/autotest_common.sh@874 -- # size=4096 00:06:05.217 20:01:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:05.217 20:01:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:05.217 20:01:03 -- common/autotest_common.sh@877 -- # return 0 00:06:05.217 20:01:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.217 20:01:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.217 20:01:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.476 /dev/nbd1 00:06:05.476 20:01:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.476 20:01:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.476 20:01:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:05.476 20:01:03 -- common/autotest_common.sh@857 -- # local i 00:06:05.476 20:01:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:05.476 20:01:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:05.476 20:01:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:05.476 20:01:03 -- common/autotest_common.sh@861 -- # break 00:06:05.476 20:01:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:05.476 20:01:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:05.476 20:01:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.476 1+0 records in 00:06:05.476 1+0 records out 00:06:05.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271127 s, 15.1 MB/s 00:06:05.476 20:01:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:05.476 20:01:03 -- common/autotest_common.sh@874 -- # size=4096 00:06:05.476 20:01:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:05.476 20:01:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:05.476 20:01:03 -- common/autotest_common.sh@877 -- # return 0 00:06:05.476 20:01:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.476 20:01:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.476 20:01:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.476 20:01:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.476 20:01:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.734 20:01:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.735 { 00:06:05.735 "nbd_device": "/dev/nbd0", 00:06:05.735 "bdev_name": "Malloc0" 00:06:05.735 }, 00:06:05.735 { 00:06:05.735 "nbd_device": "/dev/nbd1", 00:06:05.735 "bdev_name": "Malloc1" 00:06:05.735 } 00:06:05.735 ]' 
00:06:05.735 20:01:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.735 { 00:06:05.735 "nbd_device": "/dev/nbd0", 00:06:05.735 "bdev_name": "Malloc0" 00:06:05.735 }, 00:06:05.735 { 00:06:05.735 "nbd_device": "/dev/nbd1", 00:06:05.735 "bdev_name": "Malloc1" 00:06:05.735 } 00:06:05.735 ]' 00:06:05.735 20:01:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.994 /dev/nbd1' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.994 /dev/nbd1' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.994 256+0 records in 00:06:05.994 256+0 records out 00:06:05.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010405 s, 101 MB/s 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.994 256+0 records in 00:06:05.994 256+0 records out 00:06:05.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199494 s, 52.6 MB/s 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.994 256+0 records in 00:06:05.994 256+0 records out 00:06:05.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214275 s, 48.9 MB/s 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
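The nbd_dd_data_verify write/verify pass above reduces to a simple pattern: fill a scratch file with random data, dd it onto each exported nbd device with O_DIRECT, then cmp the first 1 MiB back from the device. A standalone sketch of the same idea (the scratch path and device names are illustrative):

  tmp=/tmp/nbdrandtest
  dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct     # write it through the nbd device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$nbd"                                # read back and compare
  done
  rm -f "$tmp"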
00:06:05.994 20:01:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@51 -- # local i 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.994 20:01:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.253 20:01:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.253 20:01:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.253 20:01:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.253 20:01:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.254 20:01:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.254 20:01:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.254 20:01:04 -- bdev/nbd_common.sh@41 -- # break 00:06:06.254 20:01:04 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.254 20:01:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.254 20:01:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@41 -- # break 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.513 20:01:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@65 -- # true 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.772 20:01:04 -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.772 20:01:04 -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.032 20:01:04 -- event/event.sh@35 -- # sleep 3 00:06:07.291 [2024-04-25 20:01:05.091073] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:07.291 [2024-04-25 20:01:05.190380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.291 [2024-04-25 20:01:05.190384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.550 [2024-04-25 20:01:05.237592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.550 [2024-04-25 20:01:05.237642] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.085 20:01:07 -- event/event.sh@23 -- # for i in {0..2} 00:06:10.085 20:01:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.085 spdk_app_start Round 1 00:06:10.085 20:01:07 -- event/event.sh@25 -- # waitforlisten 2046170 /var/tmp/spdk-nbd.sock 00:06:10.085 20:01:07 -- common/autotest_common.sh@819 -- # '[' -z 2046170 ']' 00:06:10.085 20:01:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.085 20:01:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:10.085 20:01:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.085 20:01:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:10.085 20:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:10.371 20:01:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.371 20:01:08 -- common/autotest_common.sh@852 -- # return 0 00:06:10.371 20:01:08 -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.371 Malloc0 00:06:10.631 20:01:08 -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.631 Malloc1 00:06:10.631 20:01:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@12 -- # local i 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.631 20:01:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.890 /dev/nbd0 00:06:10.890 20:01:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.890 20:01:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.890 
20:01:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:10.890 20:01:08 -- common/autotest_common.sh@857 -- # local i 00:06:10.890 20:01:08 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:10.890 20:01:08 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:10.890 20:01:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:10.890 20:01:08 -- common/autotest_common.sh@861 -- # break 00:06:10.890 20:01:08 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:10.890 20:01:08 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:10.890 20:01:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.890 1+0 records in 00:06:10.890 1+0 records out 00:06:10.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249225 s, 16.4 MB/s 00:06:10.890 20:01:08 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:10.890 20:01:08 -- common/autotest_common.sh@874 -- # size=4096 00:06:10.890 20:01:08 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:11.149 20:01:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:11.149 20:01:08 -- common/autotest_common.sh@877 -- # return 0 00:06:11.149 20:01:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.149 20:01:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.149 20:01:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.149 /dev/nbd1 00:06:11.149 20:01:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.149 20:01:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.149 20:01:09 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:11.149 20:01:09 -- common/autotest_common.sh@857 -- # local i 00:06:11.149 20:01:09 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:11.149 20:01:09 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:11.149 20:01:09 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:11.149 20:01:09 -- common/autotest_common.sh@861 -- # break 00:06:11.149 20:01:09 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:11.149 20:01:09 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:11.149 20:01:09 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.149 1+0 records in 00:06:11.149 1+0 records out 00:06:11.149 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286469 s, 14.3 MB/s 00:06:11.408 20:01:09 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:11.408 20:01:09 -- common/autotest_common.sh@874 -- # size=4096 00:06:11.408 20:01:09 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:11.409 20:01:09 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:11.409 20:01:09 -- common/autotest_common.sh@877 -- # return 0 00:06:11.409 20:01:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.409 20:01:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.409 20:01:09 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.409 20:01:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.409 
20:01:09 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.409 20:01:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.409 { 00:06:11.409 "nbd_device": "/dev/nbd0", 00:06:11.409 "bdev_name": "Malloc0" 00:06:11.409 }, 00:06:11.409 { 00:06:11.409 "nbd_device": "/dev/nbd1", 00:06:11.409 "bdev_name": "Malloc1" 00:06:11.409 } 00:06:11.409 ]' 00:06:11.409 20:01:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.409 { 00:06:11.409 "nbd_device": "/dev/nbd0", 00:06:11.409 "bdev_name": "Malloc0" 00:06:11.409 }, 00:06:11.409 { 00:06:11.409 "nbd_device": "/dev/nbd1", 00:06:11.409 "bdev_name": "Malloc1" 00:06:11.409 } 00:06:11.409 ]' 00:06:11.409 20:01:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.668 /dev/nbd1' 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.668 /dev/nbd1' 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.668 256+0 records in 00:06:11.668 256+0 records out 00:06:11.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111361 s, 94.2 MB/s 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.668 256+0 records in 00:06:11.668 256+0 records out 00:06:11.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230145 s, 45.6 MB/s 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.668 256+0 records in 00:06:11.668 256+0 records out 00:06:11.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262758 s, 39.9 MB/s 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.668 
20:01:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@51 -- # local i 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.668 20:01:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@41 -- # break 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.927 20:01:09 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:12.186 20:01:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@41 -- # break 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.186 20:01:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@65 -- # true 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@65 -- # count=0 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@104 -- # count=0 00:06:12.445 
20:01:10 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:12.445 20:01:10 -- bdev/nbd_common.sh@109 -- # return 0 00:06:12.445 20:01:10 -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.705 20:01:10 -- event/event.sh@35 -- # sleep 3 00:06:12.964 [2024-04-25 20:01:10.812505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.222 [2024-04-25 20:01:10.908352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.222 [2024-04-25 20:01:10.908356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.222 [2024-04-25 20:01:10.960028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.222 [2024-04-25 20:01:10.960080] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.755 20:01:13 -- event/event.sh@23 -- # for i in {0..2} 00:06:15.756 20:01:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.756 spdk_app_start Round 2 00:06:15.756 20:01:13 -- event/event.sh@25 -- # waitforlisten 2046170 /var/tmp/spdk-nbd.sock 00:06:15.756 20:01:13 -- common/autotest_common.sh@819 -- # '[' -z 2046170 ']' 00:06:15.756 20:01:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.756 20:01:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.756 20:01:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:15.756 20:01:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.756 20:01:13 -- common/autotest_common.sh@10 -- # set +x 00:06:16.014 20:01:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.014 20:01:13 -- common/autotest_common.sh@852 -- # return 0 00:06:16.014 20:01:13 -- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.273 Malloc0 00:06:16.273 20:01:14 -- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.532 Malloc1 00:06:16.532 20:01:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@12 -- # local i 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.532 20:01:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.792 /dev/nbd0 00:06:16.792 20:01:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.792 20:01:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.792 20:01:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:16.792 20:01:14 -- common/autotest_common.sh@857 -- # local i 00:06:16.792 20:01:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:16.792 20:01:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:16.792 20:01:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:16.792 20:01:14 -- common/autotest_common.sh@861 -- # break 00:06:16.792 20:01:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:16.792 20:01:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:16.792 20:01:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.792 1+0 records in 00:06:16.792 1+0 records out 00:06:16.792 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199181 s, 20.6 MB/s 00:06:16.792 20:01:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:16.792 20:01:14 -- common/autotest_common.sh@874 -- # size=4096 00:06:16.792 20:01:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:16.792 20:01:14 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:16.792 20:01:14 -- common/autotest_common.sh@877 -- # return 0 00:06:16.792 20:01:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.792 20:01:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.792 20:01:14 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.051 /dev/nbd1 00:06:17.051 20:01:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.051 20:01:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.051 20:01:14 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:17.051 20:01:14 -- common/autotest_common.sh@857 -- # local i 00:06:17.051 20:01:14 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:17.051 20:01:14 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.051 20:01:14 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:17.051 20:01:14 -- common/autotest_common.sh@861 -- # break 00:06:17.051 20:01:14 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.051 20:01:14 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:17.051 20:01:14 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.051 1+0 records in 00:06:17.051 1+0 records out 00:06:17.051 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238073 s, 17.2 MB/s 00:06:17.051 20:01:14 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:17.051 20:01:14 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.051 20:01:14 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest 00:06:17.051 20:01:14 -- 
common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.051 20:01:14 -- common/autotest_common.sh@877 -- # return 0 00:06:17.051 20:01:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.051 20:01:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.051 20:01:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.051 20:01:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.051 20:01:14 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.311 { 00:06:17.311 "nbd_device": "/dev/nbd0", 00:06:17.311 "bdev_name": "Malloc0" 00:06:17.311 }, 00:06:17.311 { 00:06:17.311 "nbd_device": "/dev/nbd1", 00:06:17.311 "bdev_name": "Malloc1" 00:06:17.311 } 00:06:17.311 ]' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.311 { 00:06:17.311 "nbd_device": "/dev/nbd0", 00:06:17.311 "bdev_name": "Malloc0" 00:06:17.311 }, 00:06:17.311 { 00:06:17.311 "nbd_device": "/dev/nbd1", 00:06:17.311 "bdev_name": "Malloc1" 00:06:17.311 } 00:06:17.311 ]' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.311 /dev/nbd1' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.311 /dev/nbd1' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.311 256+0 records in 00:06:17.311 256+0 records out 00:06:17.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0112012 s, 93.6 MB/s 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.311 256+0 records in 00:06:17.311 256+0 records out 00:06:17.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296976 s, 35.3 MB/s 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.311 256+0 records in 00:06:17.311 256+0 records out 00:06:17.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194908 s, 53.8 MB/s 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@51 -- # local i 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.311 20:01:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@41 -- # break 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.570 20:01:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.828 20:01:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.828 20:01:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.828 20:01:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.828 20:01:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.828 20:01:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.828 20:01:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.828 20:01:15 -- bdev/nbd_common.sh@41 -- # break 00:06:18.086 20:01:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.086 20:01:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:18.086 20:01:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.086 20:01:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.086 20:01:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.086 20:01:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.086 20:01:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@65 -- # true 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.345 20:01:16 -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.345 20:01:16 -- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.604 20:01:16 -- event/event.sh@35 -- # sleep 3 00:06:18.863 [2024-04-25 20:01:16.545535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.863 [2024-04-25 20:01:16.638096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.863 [2024-04-25 20:01:16.638101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.863 [2024-04-25 20:01:16.689547] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.863 [2024-04-25 20:01:16.689600] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.399 20:01:19 -- event/event.sh@38 -- # waitforlisten 2046170 /var/tmp/spdk-nbd.sock 00:06:21.399 20:01:19 -- common/autotest_common.sh@819 -- # '[' -z 2046170 ']' 00:06:21.399 20:01:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.399 20:01:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.399 20:01:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.399 20:01:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.399 20:01:19 -- common/autotest_common.sh@10 -- # set +x 00:06:21.657 20:01:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:21.657 20:01:19 -- common/autotest_common.sh@852 -- # return 0 00:06:21.657 20:01:19 -- event/event.sh@39 -- # killprocess 2046170 00:06:21.657 20:01:19 -- common/autotest_common.sh@926 -- # '[' -z 2046170 ']' 00:06:21.657 20:01:19 -- common/autotest_common.sh@930 -- # kill -0 2046170 00:06:21.657 20:01:19 -- common/autotest_common.sh@931 -- # uname 00:06:21.657 20:01:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:21.657 20:01:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2046170 00:06:21.657 20:01:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:21.657 20:01:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:21.916 20:01:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2046170' 00:06:21.916 killing process with pid 2046170 00:06:21.916 20:01:19 -- common/autotest_common.sh@945 -- # kill 2046170 00:06:21.916 20:01:19 -- common/autotest_common.sh@950 -- # wait 2046170 00:06:21.916 spdk_app_start is called in Round 0. 00:06:21.916 Shutdown signal received, stop current app iteration 00:06:21.916 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:06:21.916 spdk_app_start is called in Round 1. 
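The nbd_dd_data_verify passes traced above reduce to a small write-then-compare loop: fill a scratch file with 1 MiB of random data, dd it onto every exported NBD device, then cmp each device back against the file. The sketch below re-creates that flow outside the harness; it assumes Malloc0 and Malloc1 are already exported as /dev/nbd0 and /dev/nbd1, and the scratch-file path is illustrative rather than the test's own.

  # Write 1 MiB of random data, copy it onto each NBD device, then verify.
  tmp_file=/tmp/nbdrandtest              # illustrative path, not the harness location
  nbd_list=(/dev/nbd0 /dev/nbd1)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"    # any mismatch makes cmp exit non-zero
  done
  rm "$tmp_file"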
00:06:21.916 Shutdown signal received, stop current app iteration 00:06:21.916 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:06:21.916 spdk_app_start is called in Round 2. 00:06:21.916 Shutdown signal received, stop current app iteration 00:06:21.916 Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 reinitialization... 00:06:21.916 spdk_app_start is called in Round 3. 00:06:21.916 Shutdown signal received, stop current app iteration 00:06:21.916 20:01:19 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:21.916 20:01:19 -- event/event.sh@42 -- # return 0 00:06:21.916 00:06:21.916 real 0m18.400s 00:06:21.916 user 0m39.845s 00:06:21.916 sys 0m3.669s 00:06:21.917 20:01:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.917 20:01:19 -- common/autotest_common.sh@10 -- # set +x 00:06:21.917 ************************************ 00:06:21.917 END TEST app_repeat 00:06:21.917 ************************************ 00:06:22.176 20:01:19 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:22.176 20:01:19 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:22.176 20:01:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:22.176 20:01:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.176 20:01:19 -- common/autotest_common.sh@10 -- # set +x 00:06:22.176 ************************************ 00:06:22.176 START TEST cpu_locks 00:06:22.176 ************************************ 00:06:22.176 20:01:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:22.176 * Looking for test storage... 00:06:22.176 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event 00:06:22.176 20:01:19 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:22.176 20:01:19 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:22.176 20:01:19 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:22.176 20:01:19 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:22.176 20:01:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:22.176 20:01:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.176 20:01:19 -- common/autotest_common.sh@10 -- # set +x 00:06:22.176 ************************************ 00:06:22.176 START TEST default_locks 00:06:22.176 ************************************ 00:06:22.176 20:01:19 -- common/autotest_common.sh@1104 -- # default_locks 00:06:22.176 20:01:19 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2048891 00:06:22.176 20:01:19 -- event/cpu_locks.sh@47 -- # waitforlisten 2048891 00:06:22.176 20:01:19 -- common/autotest_common.sh@819 -- # '[' -z 2048891 ']' 00:06:22.176 20:01:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.176 20:01:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.176 20:01:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
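Before the cpu_locks suite gets going, the shape of one app_repeat round is worth spelling out: create the two Malloc bdevs, export them over NBD, run the data verify, then ask the target to exit with spdk_kill_instance and pause before the next round. The condensed sketch below shows a single round; it assumes an SPDK checkout as the working directory and a target already listening on the NBD socket, and it is the real harness, not this sketch, that re-launches the app between rounds.

  rpc=./scripts/rpc.py                                # assumed checkout-relative path
  sock=/var/tmp/spdk-nbd.sock
  "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc0 (64 MB, 4096-byte blocks)
  "$rpc" -s "$sock" bdev_malloc_create 64 4096        # -> Malloc1
  "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
  # ... dd/cmp verification as in the previous sketch ...
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  "$rpc" -s "$sock" spdk_kill_instance SIGTERM        # end of the round
  sleep 3                                             # give the app time to shut down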
00:06:22.176 20:01:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.176 20:01:19 -- common/autotest_common.sh@10 -- # set +x 00:06:22.176 20:01:19 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.176 [2024-04-25 20:01:20.000825] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:22.176 [2024-04-25 20:01:20.000904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048891 ] 00:06:22.176 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.176 [2024-04-25 20:01:20.108459] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.435 [2024-04-25 20:01:20.211360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.435 [2024-04-25 20:01:20.211512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.695 [2024-04-25 20:01:20.400600] 'OCF_Core' volume operations registered 00:06:22.695 [2024-04-25 20:01:20.404084] 'OCF_Cache' volume operations registered 00:06:22.695 [2024-04-25 20:01:20.408021] 'OCF Composite' volume operations registered 00:06:22.695 [2024-04-25 20:01:20.411608] 'SPDK_block_device' volume operations registered 00:06:22.954 20:01:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.954 20:01:20 -- common/autotest_common.sh@852 -- # return 0 00:06:22.954 20:01:20 -- event/cpu_locks.sh@49 -- # locks_exist 2048891 00:06:22.954 20:01:20 -- event/cpu_locks.sh@22 -- # lslocks -p 2048891 00:06:22.954 20:01:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.929 lslocks: write error 00:06:23.929 20:01:21 -- event/cpu_locks.sh@50 -- # killprocess 2048891 00:06:23.929 20:01:21 -- common/autotest_common.sh@926 -- # '[' -z 2048891 ']' 00:06:23.929 20:01:21 -- common/autotest_common.sh@930 -- # kill -0 2048891 00:06:23.929 20:01:21 -- common/autotest_common.sh@931 -- # uname 00:06:23.929 20:01:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.929 20:01:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2048891 00:06:23.929 20:01:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.929 20:01:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.929 20:01:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2048891' 00:06:23.929 killing process with pid 2048891 00:06:23.929 20:01:21 -- common/autotest_common.sh@945 -- # kill 2048891 00:06:23.929 20:01:21 -- common/autotest_common.sh@950 -- # wait 2048891 00:06:24.505 20:01:22 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2048891 00:06:24.506 20:01:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.506 20:01:22 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2048891 00:06:24.506 20:01:22 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:24.506 20:01:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.506 20:01:22 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:24.506 20:01:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.506 20:01:22 -- common/autotest_common.sh@643 -- # waitforlisten 2048891 00:06:24.506 20:01:22 -- common/autotest_common.sh@819 -- # '[' -z 2048891 ']' 00:06:24.506 20:01:22 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:24.506 20:01:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.506 20:01:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.506 20:01:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.506 20:01:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.506 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2048891) - No such process 00:06:24.506 ERROR: process (pid: 2048891) is no longer running 00:06:24.506 20:01:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.506 20:01:22 -- common/autotest_common.sh@852 -- # return 1 00:06:24.506 20:01:22 -- common/autotest_common.sh@643 -- # es=1 00:06:24.506 20:01:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.506 20:01:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:24.506 20:01:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.506 20:01:22 -- event/cpu_locks.sh@54 -- # no_locks 00:06:24.506 20:01:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.506 20:01:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.506 20:01:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.506 00:06:24.506 real 0m2.361s 00:06:24.506 user 0m2.374s 00:06:24.506 sys 0m0.943s 00:06:24.506 20:01:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.506 20:01:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.506 ************************************ 00:06:24.506 END TEST default_locks 00:06:24.506 ************************************ 00:06:24.506 20:01:22 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:24.506 20:01:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.506 20:01:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.506 20:01:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.506 ************************************ 00:06:24.506 START TEST default_locks_via_rpc 00:06:24.506 ************************************ 00:06:24.506 20:01:22 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:24.506 20:01:22 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2049130 00:06:24.506 20:01:22 -- event/cpu_locks.sh@63 -- # waitforlisten 2049130 00:06:24.506 20:01:22 -- common/autotest_common.sh@819 -- # '[' -z 2049130 ']' 00:06:24.506 20:01:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.506 20:01:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.506 20:01:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.506 20:01:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.506 20:01:22 -- common/autotest_common.sh@10 -- # set +x 00:06:24.506 20:01:22 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.506 [2024-04-25 20:01:22.397352] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
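The default_locks run that opened the cpu_locks suite reduces to: start one target pinned to core 0, confirm it holds an spdk_cpu_lock file, kill it, and confirm nothing is left locked. A minimal stand-alone version, with the binary path and mask taken from the trace and a sleep standing in for the real waitforlisten gate:

  ./build/bin/spdk_tgt -m 0x1 &                        # target claims core 0
  pid=$!
  sleep 2                                              # crude stand-in for waitforlisten
  lslocks -p "$pid" | grep spdk_cpu_lock               # e.g. /var/tmp/spdk_cpu_lock_000
  kill "$pid"
  wait "$pid" || true
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo 'stale lock left behind' || echo 'no locks left'

The stray 'lslocks: write error' lines in the trace appear to be a side effect of piping lslocks into grep -q, which closes the pipe as soon as it matches; they are not test failures.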
00:06:24.506 [2024-04-25 20:01:22.397430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049130 ] 00:06:24.765 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.765 [2024-04-25 20:01:22.505041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.765 [2024-04-25 20:01:22.612407] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.765 [2024-04-25 20:01:22.612558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.024 [2024-04-25 20:01:22.820328] 'OCF_Core' volume operations registered 00:06:25.024 [2024-04-25 20:01:22.823881] 'OCF_Cache' volume operations registered 00:06:25.025 [2024-04-25 20:01:22.827797] 'OCF Composite' volume operations registered 00:06:25.025 [2024-04-25 20:01:22.831283] 'SPDK_block_device' volume operations registered 00:06:25.607 20:01:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.607 20:01:23 -- common/autotest_common.sh@852 -- # return 0 00:06:25.607 20:01:23 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.607 20:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.607 20:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:25.607 20:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:25.607 20:01:23 -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.607 20:01:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.607 20:01:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.607 20:01:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.607 20:01:23 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.607 20:01:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.607 20:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:25.607 20:01:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:25.607 20:01:23 -- event/cpu_locks.sh@71 -- # locks_exist 2049130 00:06:25.607 20:01:23 -- event/cpu_locks.sh@22 -- # lslocks -p 2049130 00:06:25.607 20:01:23 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.545 20:01:24 -- event/cpu_locks.sh@73 -- # killprocess 2049130 00:06:26.545 20:01:24 -- common/autotest_common.sh@926 -- # '[' -z 2049130 ']' 00:06:26.545 20:01:24 -- common/autotest_common.sh@930 -- # kill -0 2049130 00:06:26.545 20:01:24 -- common/autotest_common.sh@931 -- # uname 00:06:26.545 20:01:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.545 20:01:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2049130 00:06:26.545 20:01:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.545 20:01:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.545 20:01:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2049130' 00:06:26.545 killing process with pid 2049130 00:06:26.545 20:01:24 -- common/autotest_common.sh@945 -- # kill 2049130 00:06:26.545 20:01:24 -- common/autotest_common.sh@950 -- # wait 2049130 00:06:27.114 00:06:27.114 real 0m2.409s 00:06:27.114 user 0m2.459s 00:06:27.114 sys 0m0.963s 00:06:27.114 20:01:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.114 20:01:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.114 ************************************ 00:06:27.114 END TEST default_locks_via_rpc 00:06:27.114 
************************************ 00:06:27.114 20:01:24 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:27.114 20:01:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.114 20:01:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.114 20:01:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.114 ************************************ 00:06:27.114 START TEST non_locking_app_on_locked_coremask 00:06:27.114 ************************************ 00:06:27.114 20:01:24 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:27.114 20:01:24 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2049513 00:06:27.114 20:01:24 -- event/cpu_locks.sh@81 -- # waitforlisten 2049513 /var/tmp/spdk.sock 00:06:27.114 20:01:24 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.114 20:01:24 -- common/autotest_common.sh@819 -- # '[' -z 2049513 ']' 00:06:27.114 20:01:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.114 20:01:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.114 20:01:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.114 20:01:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.114 20:01:24 -- common/autotest_common.sh@10 -- # set +x 00:06:27.114 [2024-04-25 20:01:24.844615] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:27.114 [2024-04-25 20:01:24.844702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049513 ] 00:06:27.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.114 [2024-04-25 20:01:24.950040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.374 [2024-04-25 20:01:25.054813] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.374 [2024-04-25 20:01:25.054964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.374 [2024-04-25 20:01:25.243242] 'OCF_Core' volume operations registered 00:06:27.374 [2024-04-25 20:01:25.246464] 'OCF_Cache' volume operations registered 00:06:27.374 [2024-04-25 20:01:25.250057] 'OCF Composite' volume operations registered 00:06:27.374 [2024-04-25 20:01:25.253271] 'SPDK_block_device' volume operations registered 00:06:27.943 20:01:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.943 20:01:25 -- common/autotest_common.sh@852 -- # return 0 00:06:27.943 20:01:25 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2049691 00:06:27.943 20:01:25 -- event/cpu_locks.sh@85 -- # waitforlisten 2049691 /var/tmp/spdk2.sock 00:06:27.943 20:01:25 -- common/autotest_common.sh@819 -- # '[' -z 2049691 ']' 00:06:27.943 20:01:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.943 20:01:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.943 20:01:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
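default_locks_via_rpc, which finished just above, exercises the same lock files but toggles them at runtime over JSON-RPC instead of at process startup. A short sketch, assuming a single spdk_tgt is already up on the default /var/tmp/spdk.sock:

  rpc=./scripts/rpc.py
  "$rpc" framework_disable_cpumask_locks               # drop the per-core lock files
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no lock files while disabled'
  "$rpc" framework_enable_cpumask_locks                # take them again
  lslocks -p "$(pgrep -f spdk_tgt)" | grep spdk_cpu_lock   # assumes exactly one spdk_tgt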
00:06:27.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.943 20:01:25 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.943 20:01:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.943 20:01:25 -- common/autotest_common.sh@10 -- # set +x 00:06:27.943 [2024-04-25 20:01:25.737858] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:27.943 [2024-04-25 20:01:25.737934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049691 ] 00:06:27.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.202 [2024-04-25 20:01:25.882290] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:28.202 [2024-04-25 20:01:25.882324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.202 [2024-04-25 20:01:26.078064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.202 [2024-04-25 20:01:26.078221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.770 [2024-04-25 20:01:26.456503] 'OCF_Core' volume operations registered 00:06:28.770 [2024-04-25 20:01:26.463791] 'OCF_Cache' volume operations registered 00:06:28.770 [2024-04-25 20:01:26.471656] 'OCF Composite' volume operations registered 00:06:28.770 [2024-04-25 20:01:26.479037] 'SPDK_block_device' volume operations registered 00:06:29.337 20:01:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.337 20:01:27 -- common/autotest_common.sh@852 -- # return 0 00:06:29.337 20:01:27 -- event/cpu_locks.sh@87 -- # locks_exist 2049513 00:06:29.337 20:01:27 -- event/cpu_locks.sh@22 -- # lslocks -p 2049513 00:06:29.337 20:01:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.242 lslocks: write error 00:06:31.242 20:01:28 -- event/cpu_locks.sh@89 -- # killprocess 2049513 00:06:31.242 20:01:28 -- common/autotest_common.sh@926 -- # '[' -z 2049513 ']' 00:06:31.242 20:01:28 -- common/autotest_common.sh@930 -- # kill -0 2049513 00:06:31.242 20:01:28 -- common/autotest_common.sh@931 -- # uname 00:06:31.242 20:01:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.242 20:01:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2049513 00:06:31.242 20:01:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.242 20:01:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.242 20:01:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2049513' 00:06:31.242 killing process with pid 2049513 00:06:31.242 20:01:28 -- common/autotest_common.sh@945 -- # kill 2049513 00:06:31.242 20:01:28 -- common/autotest_common.sh@950 -- # wait 2049513 00:06:32.178 20:01:29 -- event/cpu_locks.sh@90 -- # killprocess 2049691 00:06:32.178 20:01:29 -- common/autotest_common.sh@926 -- # '[' -z 2049691 ']' 00:06:32.178 20:01:29 -- common/autotest_common.sh@930 -- # kill -0 2049691 00:06:32.178 20:01:29 -- common/autotest_common.sh@931 -- # uname 00:06:32.178 20:01:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.178 20:01:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2049691 00:06:32.178 20:01:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
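non_locking_app_on_locked_coremask, traced here, checks that a second target can share an already-locked core as long as it opts out of locking. The pair of launches boils down to the following; paths and masks are copied from the trace, and the second instance gets its own RPC socket via -r so rpc.py -s can address either one.

  ./build/bin/spdk_tgt -m 0x1 &                                        # holds the core-0 lock
  first=$!
  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  second=$!
  sleep 2
  lslocks -p "$first" | grep spdk_cpu_lock             # the lock stays with the first pid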
00:06:32.178 20:01:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.178 20:01:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2049691' 00:06:32.178 killing process with pid 2049691 00:06:32.178 20:01:30 -- common/autotest_common.sh@945 -- # kill 2049691 00:06:32.178 20:01:30 -- common/autotest_common.sh@950 -- # wait 2049691 00:06:32.746 00:06:32.746 real 0m5.742s 00:06:32.746 user 0m6.009s 00:06:32.746 sys 0m1.870s 00:06:32.746 20:01:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.746 20:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:32.746 ************************************ 00:06:32.746 END TEST non_locking_app_on_locked_coremask 00:06:32.746 ************************************ 00:06:32.746 20:01:30 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.746 20:01:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.746 20:01:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.746 20:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:32.746 ************************************ 00:06:32.746 START TEST locking_app_on_unlocked_coremask 00:06:32.746 ************************************ 00:06:32.746 20:01:30 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:32.746 20:01:30 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2050341 00:06:32.746 20:01:30 -- event/cpu_locks.sh@99 -- # waitforlisten 2050341 /var/tmp/spdk.sock 00:06:32.746 20:01:30 -- common/autotest_common.sh@819 -- # '[' -z 2050341 ']' 00:06:32.746 20:01:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.746 20:01:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.746 20:01:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.746 20:01:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.746 20:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:32.746 20:01:30 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.746 [2024-04-25 20:01:30.645802] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:32.746 [2024-04-25 20:01:30.645878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050341 ] 00:06:33.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.006 [2024-04-25 20:01:30.751502] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.006 [2024-04-25 20:01:30.751542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.006 [2024-04-25 20:01:30.855408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.006 [2024-04-25 20:01:30.855560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.265 [2024-04-25 20:01:31.057732] 'OCF_Core' volume operations registered 00:06:33.265 [2024-04-25 20:01:31.061210] 'OCF_Cache' volume operations registered 00:06:33.265 [2024-04-25 20:01:31.065154] 'OCF Composite' volume operations registered 00:06:33.265 [2024-04-25 20:01:31.068643] 'SPDK_block_device' volume operations registered 00:06:33.831 20:01:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.831 20:01:31 -- common/autotest_common.sh@852 -- # return 0 00:06:33.831 20:01:31 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.831 20:01:31 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2050452 00:06:33.831 20:01:31 -- event/cpu_locks.sh@103 -- # waitforlisten 2050452 /var/tmp/spdk2.sock 00:06:33.831 20:01:31 -- common/autotest_common.sh@819 -- # '[' -z 2050452 ']' 00:06:33.831 20:01:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.831 20:01:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.831 20:01:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.831 20:01:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.831 20:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:33.831 [2024-04-25 20:01:31.612315] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
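locking_app_on_unlocked_coremask inverts the previous case: here the first target is the one that skips locking, so a second, normally-locking target on the same mask can claim core 0 and create the lock file itself. A condensed equivalent:

  ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &    # leaves core 0 unlocked
  first=$!
  ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # this one takes the lock
  second=$!
  sleep 2
  lslocks -p "$second" | grep spdk_cpu_lock                # lock now belongs to the second pid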
00:06:33.831 [2024-04-25 20:01:31.612387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050452 ] 00:06:33.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.831 [2024-04-25 20:01:31.755761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.090 [2024-04-25 20:01:31.962484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.090 [2024-04-25 20:01:31.962652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.658 [2024-04-25 20:01:32.347681] 'OCF_Core' volume operations registered 00:06:34.658 [2024-04-25 20:01:32.355166] 'OCF_Cache' volume operations registered 00:06:34.658 [2024-04-25 20:01:32.363157] 'OCF Composite' volume operations registered 00:06:34.658 [2024-04-25 20:01:32.370668] 'SPDK_block_device' volume operations registered 00:06:35.593 20:01:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.593 20:01:33 -- common/autotest_common.sh@852 -- # return 0 00:06:35.593 20:01:33 -- event/cpu_locks.sh@105 -- # locks_exist 2050452 00:06:35.593 20:01:33 -- event/cpu_locks.sh@22 -- # lslocks -p 2050452 00:06:35.593 20:01:33 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.126 lslocks: write error 00:06:38.126 20:01:35 -- event/cpu_locks.sh@107 -- # killprocess 2050341 00:06:38.126 20:01:35 -- common/autotest_common.sh@926 -- # '[' -z 2050341 ']' 00:06:38.126 20:01:35 -- common/autotest_common.sh@930 -- # kill -0 2050341 00:06:38.126 20:01:35 -- common/autotest_common.sh@931 -- # uname 00:06:38.126 20:01:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:38.126 20:01:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2050341 00:06:38.126 20:01:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:38.126 20:01:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:38.126 20:01:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2050341' 00:06:38.126 killing process with pid 2050341 00:06:38.126 20:01:35 -- common/autotest_common.sh@945 -- # kill 2050341 00:06:38.126 20:01:35 -- common/autotest_common.sh@950 -- # wait 2050341 00:06:39.063 20:01:36 -- event/cpu_locks.sh@108 -- # killprocess 2050452 00:06:39.063 20:01:36 -- common/autotest_common.sh@926 -- # '[' -z 2050452 ']' 00:06:39.063 20:01:36 -- common/autotest_common.sh@930 -- # kill -0 2050452 00:06:39.063 20:01:36 -- common/autotest_common.sh@931 -- # uname 00:06:39.063 20:01:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.063 20:01:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2050452 00:06:39.063 20:01:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.063 20:01:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.063 20:01:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2050452' 00:06:39.063 killing process with pid 2050452 00:06:39.063 20:01:36 -- common/autotest_common.sh@945 -- # kill 2050452 00:06:39.063 20:01:36 -- common/autotest_common.sh@950 -- # wait 2050452 00:06:39.631 00:06:39.631 real 0m6.744s 00:06:39.631 user 0m7.234s 00:06:39.631 sys 0m2.336s 00:06:39.631 20:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.631 20:01:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.631 
************************************ 00:06:39.631 END TEST locking_app_on_unlocked_coremask 00:06:39.631 ************************************ 00:06:39.631 20:01:37 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:39.631 20:01:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.631 20:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.631 20:01:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.631 ************************************ 00:06:39.631 START TEST locking_app_on_locked_coremask 00:06:39.631 ************************************ 00:06:39.631 20:01:37 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:39.631 20:01:37 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2051280 00:06:39.631 20:01:37 -- event/cpu_locks.sh@116 -- # waitforlisten 2051280 /var/tmp/spdk.sock 00:06:39.631 20:01:37 -- common/autotest_common.sh@819 -- # '[' -z 2051280 ']' 00:06:39.631 20:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.631 20:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.631 20:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.631 20:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.631 20:01:37 -- common/autotest_common.sh@10 -- # set +x 00:06:39.631 20:01:37 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.631 [2024-04-25 20:01:37.436027] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
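Every sub-test above gates its RPC traffic on waitforlisten. The loop below is a hand-rolled approximation, not SPDK's actual helper (the real one in test/common/autotest_common.sh also watches the pid and uses the max_retries=100 budget visible in the trace); it simply polls the RPC socket until the target answers.

  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done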
00:06:39.631 [2024-04-25 20:01:37.436103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051280 ] 00:06:39.631 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.631 [2024-04-25 20:01:37.542291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.958 [2024-04-25 20:01:37.646717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.958 [2024-04-25 20:01:37.646868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.958 [2024-04-25 20:01:37.831956] 'OCF_Core' volume operations registered 00:06:39.958 [2024-04-25 20:01:37.835155] 'OCF_Cache' volume operations registered 00:06:39.958 [2024-04-25 20:01:37.838797] 'OCF Composite' volume operations registered 00:06:39.958 [2024-04-25 20:01:37.842009] 'SPDK_block_device' volume operations registered 00:06:40.565 20:01:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.565 20:01:38 -- common/autotest_common.sh@852 -- # return 0 00:06:40.565 20:01:38 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:40.565 20:01:38 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2051390 00:06:40.565 20:01:38 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2051390 /var/tmp/spdk2.sock 00:06:40.565 20:01:38 -- common/autotest_common.sh@640 -- # local es=0 00:06:40.565 20:01:38 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2051390 /var/tmp/spdk2.sock 00:06:40.565 20:01:38 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:40.565 20:01:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.565 20:01:38 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:40.565 20:01:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:40.565 20:01:38 -- common/autotest_common.sh@643 -- # waitforlisten 2051390 /var/tmp/spdk2.sock 00:06:40.565 20:01:38 -- common/autotest_common.sh@819 -- # '[' -z 2051390 ']' 00:06:40.565 20:01:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.565 20:01:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.565 20:01:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.565 20:01:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.565 20:01:38 -- common/autotest_common.sh@10 -- # set +x 00:06:40.565 [2024-04-25 20:01:38.398921] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:40.565 [2024-04-25 20:01:38.398994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051390 ] 00:06:40.565 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.824 [2024-04-25 20:01:38.541991] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2051280 has claimed it. 00:06:40.824 [2024-04-25 20:01:38.542037] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
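The 'Cannot create lock on core 0 ... exiting' error just above is the point of locking_app_on_locked_coremask: the second target must refuse to start, so the harness wraps waitforlisten in its NOT helper and treats failure as the passing outcome. A bare-bones, self-contained stand-in for that inversion (the real helper also normalizes exit codes, which is what the es=1 bookkeeping in the trace is doing):

  # Succeeds only when the wrapped command fails.
  not() { if "$@"; then return 1; else return 0; fi; }
  not false && echo 'expected failure observed'
  not true  || echo 'unexpected success was caught'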
00:06:41.391 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2051390) - No such process 00:06:41.391 ERROR: process (pid: 2051390) is no longer running 00:06:41.391 20:01:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.391 20:01:39 -- common/autotest_common.sh@852 -- # return 1 00:06:41.391 20:01:39 -- common/autotest_common.sh@643 -- # es=1 00:06:41.391 20:01:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:41.391 20:01:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:41.391 20:01:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:41.391 20:01:39 -- event/cpu_locks.sh@122 -- # locks_exist 2051280 00:06:41.391 20:01:39 -- event/cpu_locks.sh@22 -- # lslocks -p 2051280 00:06:41.391 20:01:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.328 lslocks: write error 00:06:42.328 20:01:40 -- event/cpu_locks.sh@124 -- # killprocess 2051280 00:06:42.328 20:01:40 -- common/autotest_common.sh@926 -- # '[' -z 2051280 ']' 00:06:42.328 20:01:40 -- common/autotest_common.sh@930 -- # kill -0 2051280 00:06:42.328 20:01:40 -- common/autotest_common.sh@931 -- # uname 00:06:42.328 20:01:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:42.328 20:01:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2051280 00:06:42.328 20:01:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:42.328 20:01:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:42.328 20:01:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2051280' 00:06:42.328 killing process with pid 2051280 00:06:42.328 20:01:40 -- common/autotest_common.sh@945 -- # kill 2051280 00:06:42.328 20:01:40 -- common/autotest_common.sh@950 -- # wait 2051280 00:06:42.897 00:06:42.897 real 0m3.301s 00:06:42.897 user 0m3.536s 00:06:42.897 sys 0m1.252s 00:06:42.897 20:01:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.897 20:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.897 ************************************ 00:06:42.897 END TEST locking_app_on_locked_coremask 00:06:42.897 ************************************ 00:06:42.897 20:01:40 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:42.897 20:01:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.897 20:01:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.897 20:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.897 ************************************ 00:06:42.897 START TEST locking_overlapped_coremask 00:06:42.897 ************************************ 00:06:42.897 20:01:40 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:42.897 20:01:40 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2051771 00:06:42.897 20:01:40 -- event/cpu_locks.sh@133 -- # waitforlisten 2051771 /var/tmp/spdk.sock 00:06:42.897 20:01:40 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:42.897 20:01:40 -- common/autotest_common.sh@819 -- # '[' -z 2051771 ']' 00:06:42.897 20:01:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.897 20:01:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.897 20:01:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
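Each sub-test closes with the same killprocess ritual traced above: check the pid is still alive, check it is not a sudo wrapper, announce the kill, then kill and reap it. Condensed into a function (simplified; the sudo case just bails out here):

  killprocess() {
      local pid=$1
      kill -0 "$pid" || return 0                                    # already gone
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # refuse to kill a sudo wrapper
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                                           # works because the pid is our child
  }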
00:06:42.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.897 20:01:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.897 20:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:42.897 [2024-04-25 20:01:40.783535] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:42.897 [2024-04-25 20:01:40.783600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051771 ] 00:06:42.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.157 [2024-04-25 20:01:40.876213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.157 [2024-04-25 20:01:40.979537] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.157 [2024-04-25 20:01:40.979738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.157 [2024-04-25 20:01:40.979838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.157 [2024-04-25 20:01:40.979842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.417 [2024-04-25 20:01:41.176818] 'OCF_Core' volume operations registered 00:06:43.417 [2024-04-25 20:01:41.180301] 'OCF_Cache' volume operations registered 00:06:43.417 [2024-04-25 20:01:41.184240] 'OCF Composite' volume operations registered 00:06:43.417 [2024-04-25 20:01:41.187724] 'SPDK_block_device' volume operations registered 00:06:43.677 20:01:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.677 20:01:41 -- common/autotest_common.sh@852 -- # return 0 00:06:43.677 20:01:41 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2051936 00:06:43.677 20:01:41 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2051936 /var/tmp/spdk2.sock 00:06:43.677 20:01:41 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:43.677 20:01:41 -- common/autotest_common.sh@640 -- # local es=0 00:06:43.677 20:01:41 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2051936 /var/tmp/spdk2.sock 00:06:43.677 20:01:41 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:43.677 20:01:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.677 20:01:41 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:43.677 20:01:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:43.677 20:01:41 -- common/autotest_common.sh@643 -- # waitforlisten 2051936 /var/tmp/spdk2.sock 00:06:43.677 20:01:41 -- common/autotest_common.sh@819 -- # '[' -z 2051936 ']' 00:06:43.677 20:01:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.677 20:01:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.677 20:01:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:43.677 20:01:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.677 20:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:43.936 [2024-04-25 20:01:41.650152] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:43.936 [2024-04-25 20:01:41.650229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051936 ] 00:06:43.936 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.936 [2024-04-25 20:01:41.765731] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2051771 has claimed it. 00:06:43.936 [2024-04-25 20:01:41.765772] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.505 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2051936) - No such process 00:06:44.505 ERROR: process (pid: 2051936) is no longer running 00:06:44.505 20:01:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.505 20:01:42 -- common/autotest_common.sh@852 -- # return 1 00:06:44.505 20:01:42 -- common/autotest_common.sh@643 -- # es=1 00:06:44.505 20:01:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:44.505 20:01:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:44.505 20:01:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:44.505 20:01:42 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:44.505 20:01:42 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:44.505 20:01:42 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:44.505 20:01:42 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:44.505 20:01:42 -- event/cpu_locks.sh@141 -- # killprocess 2051771 00:06:44.505 20:01:42 -- common/autotest_common.sh@926 -- # '[' -z 2051771 ']' 00:06:44.505 20:01:42 -- common/autotest_common.sh@930 -- # kill -0 2051771 00:06:44.505 20:01:42 -- common/autotest_common.sh@931 -- # uname 00:06:44.505 20:01:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.505 20:01:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2051771 00:06:44.505 20:01:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.505 20:01:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.505 20:01:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2051771' 00:06:44.505 killing process with pid 2051771 00:06:44.505 20:01:42 -- common/autotest_common.sh@945 -- # kill 2051771 00:06:44.505 20:01:42 -- common/autotest_common.sh@950 -- # wait 2051771 00:06:45.074 00:06:45.074 real 0m2.223s 00:06:45.074 user 0m5.977s 00:06:45.074 sys 0m0.645s 00:06:45.074 20:01:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.074 20:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:45.074 ************************************ 00:06:45.074 END TEST locking_overlapped_coremask 00:06:45.074 ************************************ 00:06:45.074 20:01:42 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:45.074 20:01:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:45.074 20:01:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.074 20:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:45.074 ************************************ 00:06:45.074 START 
TEST locking_overlapped_coremask_via_rpc 00:06:45.074 ************************************ 00:06:45.074 20:01:43 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:45.074 20:01:43 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2052165 00:06:45.074 20:01:43 -- event/cpu_locks.sh@149 -- # waitforlisten 2052165 /var/tmp/spdk.sock 00:06:45.074 20:01:43 -- common/autotest_common.sh@819 -- # '[' -z 2052165 ']' 00:06:45.074 20:01:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.074 20:01:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.074 20:01:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.074 20:01:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.074 20:01:43 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:45.074 20:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:45.333 [2024-04-25 20:01:43.053700] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:45.333 [2024-04-25 20:01:43.053770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052165 ] 00:06:45.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.333 [2024-04-25 20:01:43.159738] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:45.333 [2024-04-25 20:01:43.159769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.333 [2024-04-25 20:01:43.255728] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:45.333 [2024-04-25 20:01:43.255911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.333 [2024-04-25 20:01:43.256002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.333 [2024-04-25 20:01:43.256006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.592 [2024-04-25 20:01:43.453437] 'OCF_Core' volume operations registered 00:06:45.592 [2024-04-25 20:01:43.456913] 'OCF_Cache' volume operations registered 00:06:45.592 [2024-04-25 20:01:43.460862] 'OCF Composite' volume operations registered 00:06:45.592 [2024-04-25 20:01:43.464348] 'SPDK_block_device' volume operations registered 00:06:46.159 20:01:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.159 20:01:43 -- common/autotest_common.sh@852 -- # return 0 00:06:46.159 20:01:43 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2052182 00:06:46.160 20:01:43 -- event/cpu_locks.sh@153 -- # waitforlisten 2052182 /var/tmp/spdk2.sock 00:06:46.160 20:01:43 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:46.160 20:01:43 -- common/autotest_common.sh@819 -- # '[' -z 2052182 ']' 00:06:46.160 20:01:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:46.160 20:01:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:46.160 20:01:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:46.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:46.160 20:01:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:46.160 20:01:43 -- common/autotest_common.sh@10 -- # set +x 00:06:46.160 [2024-04-25 20:01:44.038357] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:46.160 [2024-04-25 20:01:44.038440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052182 ] 00:06:46.160 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.418 [2024-04-25 20:01:44.155410] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:46.418 [2024-04-25 20:01:44.155438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.418 [2024-04-25 20:01:44.323753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:46.418 [2024-04-25 20:01:44.323987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.418 [2024-04-25 20:01:44.324108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.418 [2024-04-25 20:01:44.324110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:46.987 [2024-04-25 20:01:44.673108] 'OCF_Core' volume operations registered 00:06:46.987 [2024-04-25 20:01:44.680057] 'OCF_Cache' volume operations registered 00:06:46.987 [2024-04-25 20:01:44.687406] 'OCF Composite' volume operations registered 00:06:46.987 [2024-04-25 20:01:44.694328] 'SPDK_block_device' volume operations registered 00:06:47.926 20:01:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:47.926 20:01:45 -- common/autotest_common.sh@852 -- # return 0 00:06:47.926 20:01:45 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.926 20:01:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.926 20:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.926 20:01:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:47.926 20:01:45 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.926 20:01:45 -- common/autotest_common.sh@640 -- # local es=0 00:06:47.926 20:01:45 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.926 20:01:45 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:47.926 20:01:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.926 20:01:45 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:47.926 20:01:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:47.926 20:01:45 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.926 20:01:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:47.926 20:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:47.926 [2024-04-25 20:01:45.657713] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2052165 has claimed it. 
00:06:47.926 request: 00:06:47.926 { 00:06:47.926 "method": "framework_enable_cpumask_locks", 00:06:47.926 "req_id": 1 00:06:47.926 } 00:06:47.926 Got JSON-RPC error response 00:06:47.926 response: 00:06:47.926 { 00:06:47.926 "code": -32603, 00:06:47.926 "message": "Failed to claim CPU core: 2" 00:06:47.926 } 00:06:47.926 20:01:45 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:47.926 20:01:45 -- common/autotest_common.sh@643 -- # es=1 00:06:47.926 20:01:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:47.926 20:01:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:47.926 20:01:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:47.926 20:01:45 -- event/cpu_locks.sh@158 -- # waitforlisten 2052165 /var/tmp/spdk.sock 00:06:47.926 20:01:45 -- common/autotest_common.sh@819 -- # '[' -z 2052165 ']' 00:06:47.926 20:01:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.926 20:01:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.926 20:01:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.926 20:01:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.926 20:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:48.185 20:01:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.185 20:01:45 -- common/autotest_common.sh@852 -- # return 0 00:06:48.185 20:01:45 -- event/cpu_locks.sh@159 -- # waitforlisten 2052182 /var/tmp/spdk2.sock 00:06:48.185 20:01:45 -- common/autotest_common.sh@819 -- # '[' -z 2052182 ']' 00:06:48.185 20:01:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.185 20:01:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.185 20:01:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
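The -32603 response above is how the deferred-lock variant fails: both targets start with --disable-cpumask-locks, so neither takes core locks at startup, and the locks are only claimed when framework_enable_cpumask_locks is issued over JSON-RPC; the second target then collides on core 2. Roughly the same sequence outside the test wrapper (rpc_cmd in the trace is a test helper; scripts/rpc.py with -s selecting the socket is assumed here to be the standalone equivalent):

  spdk_tgt -m 0x7  --disable-cpumask-locks &                          # cores 0-2, no locks yet
  spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cores 2-4, no locks yet
  scripts/rpc.py framework_enable_cpumask_locks                       # first target claims cores 0-2
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
  # expected to fail with code -32603, "Failed to claim CPU core: 2"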
00:06:48.185 20:01:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.185 20:01:45 -- common/autotest_common.sh@10 -- # set +x 00:06:48.185 20:01:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.185 20:01:46 -- common/autotest_common.sh@852 -- # return 0 00:06:48.185 20:01:46 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:48.185 20:01:46 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.185 20:01:46 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.185 20:01:46 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.185 00:06:48.185 real 0m3.104s 00:06:48.185 user 0m1.243s 00:06:48.185 sys 0m0.289s 00:06:48.185 20:01:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.185 20:01:46 -- common/autotest_common.sh@10 -- # set +x 00:06:48.185 ************************************ 00:06:48.185 END TEST locking_overlapped_coremask_via_rpc 00:06:48.185 ************************************ 00:06:48.445 20:01:46 -- event/cpu_locks.sh@174 -- # cleanup 00:06:48.445 20:01:46 -- event/cpu_locks.sh@15 -- # [[ -z 2052165 ]] 00:06:48.445 20:01:46 -- event/cpu_locks.sh@15 -- # killprocess 2052165 00:06:48.445 20:01:46 -- common/autotest_common.sh@926 -- # '[' -z 2052165 ']' 00:06:48.445 20:01:46 -- common/autotest_common.sh@930 -- # kill -0 2052165 00:06:48.445 20:01:46 -- common/autotest_common.sh@931 -- # uname 00:06:48.445 20:01:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:48.445 20:01:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2052165 00:06:48.445 20:01:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:48.445 20:01:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:48.445 20:01:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2052165' 00:06:48.445 killing process with pid 2052165 00:06:48.445 20:01:46 -- common/autotest_common.sh@945 -- # kill 2052165 00:06:48.445 20:01:46 -- common/autotest_common.sh@950 -- # wait 2052165 00:06:49.014 20:01:46 -- event/cpu_locks.sh@16 -- # [[ -z 2052182 ]] 00:06:49.014 20:01:46 -- event/cpu_locks.sh@16 -- # killprocess 2052182 00:06:49.014 20:01:46 -- common/autotest_common.sh@926 -- # '[' -z 2052182 ']' 00:06:49.014 20:01:46 -- common/autotest_common.sh@930 -- # kill -0 2052182 00:06:49.014 20:01:46 -- common/autotest_common.sh@931 -- # uname 00:06:49.014 20:01:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.014 20:01:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2052182 00:06:49.014 20:01:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:49.014 20:01:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:49.014 20:01:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2052182' 00:06:49.014 killing process with pid 2052182 00:06:49.014 20:01:46 -- common/autotest_common.sh@945 -- # kill 2052182 00:06:49.014 20:01:46 -- common/autotest_common.sh@950 -- # wait 2052182 00:06:49.583 20:01:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.583 20:01:47 -- event/cpu_locks.sh@1 -- # cleanup 00:06:49.583 20:01:47 -- event/cpu_locks.sh@15 -- # [[ -z 2052165 ]] 00:06:49.583 20:01:47 -- event/cpu_locks.sh@15 -- # killprocess 2052165 
00:06:49.583 20:01:47 -- common/autotest_common.sh@926 -- # '[' -z 2052165 ']' 00:06:49.583 20:01:47 -- common/autotest_common.sh@930 -- # kill -0 2052165 00:06:49.583 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2052165) - No such process 00:06:49.583 20:01:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2052165 is not found' 00:06:49.583 Process with pid 2052165 is not found 00:06:49.583 20:01:47 -- event/cpu_locks.sh@16 -- # [[ -z 2052182 ]] 00:06:49.583 20:01:47 -- event/cpu_locks.sh@16 -- # killprocess 2052182 00:06:49.583 20:01:47 -- common/autotest_common.sh@926 -- # '[' -z 2052182 ']' 00:06:49.583 20:01:47 -- common/autotest_common.sh@930 -- # kill -0 2052182 00:06:49.583 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2052182) - No such process 00:06:49.583 20:01:47 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2052182 is not found' 00:06:49.583 Process with pid 2052182 is not found 00:06:49.583 20:01:47 -- event/cpu_locks.sh@18 -- # rm -f 00:06:49.583 00:06:49.583 real 0m27.497s 00:06:49.583 user 0m45.311s 00:06:49.583 sys 0m9.465s 00:06:49.584 20:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.584 20:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.584 ************************************ 00:06:49.584 END TEST cpu_locks 00:06:49.584 ************************************ 00:06:49.584 00:06:49.584 real 0m55.548s 00:06:49.584 user 1m42.370s 00:06:49.584 sys 0m14.321s 00:06:49.584 20:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.584 20:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.584 ************************************ 00:06:49.584 END TEST event 00:06:49.584 ************************************ 00:06:49.584 20:01:47 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh 00:06:49.584 20:01:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.584 20:01:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.584 20:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.584 ************************************ 00:06:49.584 START TEST thread 00:06:49.584 ************************************ 00:06:49.584 20:01:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh 00:06:49.843 * Looking for test storage... 00:06:49.843 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread 00:06:49.843 20:01:47 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.843 20:01:47 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:49.843 20:01:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.843 20:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:49.843 ************************************ 00:06:49.843 START TEST thread_poller_perf 00:06:49.843 ************************************ 00:06:49.843 20:01:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:49.843 [2024-04-25 20:01:47.582272] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:49.843 [2024-04-25 20:01:47.582367] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052807 ] 00:06:49.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.843 [2024-04-25 20:01:47.686607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.102 [2024-04-25 20:01:47.783419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.102 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:51.040 ====================================== 00:06:51.040 busy:2311883324 (cyc) 00:06:51.040 total_run_count: 259000 00:06:51.040 tsc_hz: 2300000000 (cyc) 00:06:51.040 ====================================== 00:06:51.040 poller_cost: 8926 (cyc), 3880 (nsec) 00:06:51.040 00:06:51.040 real 0m1.344s 00:06:51.040 user 0m1.217s 00:06:51.040 sys 0m0.119s 00:06:51.040 20:01:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.040 20:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.040 ************************************ 00:06:51.040 END TEST thread_poller_perf 00:06:51.040 ************************************ 00:06:51.040 20:01:48 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.040 20:01:48 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:51.040 20:01:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.040 20:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:51.040 ************************************ 00:06:51.040 START TEST thread_poller_perf 00:06:51.040 ************************************ 00:06:51.040 20:01:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:51.040 [2024-04-25 20:01:48.970703] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:51.040 [2024-04-25 20:01:48.970795] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053003 ] 00:06:51.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.299 [2024-04-25 20:01:49.077292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.299 [2024-04-25 20:01:49.175686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.299 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:52.677 ====================================== 00:06:52.677 busy:2303451000 (cyc) 00:06:52.677 total_run_count: 3474000 00:06:52.677 tsc_hz: 2300000000 (cyc) 00:06:52.677 ====================================== 00:06:52.677 poller_cost: 663 (cyc), 288 (nsec) 00:06:52.677 00:06:52.677 real 0m1.344s 00:06:52.677 user 0m1.224s 00:06:52.677 sys 0m0.113s 00:06:52.677 20:01:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.677 20:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.677 ************************************ 00:06:52.677 END TEST thread_poller_perf 00:06:52.677 ************************************ 00:06:52.677 20:01:50 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:52.677 00:06:52.677 real 0m2.879s 00:06:52.677 user 0m2.503s 00:06:52.677 sys 0m0.390s 00:06:52.677 20:01:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.677 20:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.677 ************************************ 00:06:52.677 END TEST thread 00:06:52.677 ************************************ 00:06:52.677 20:01:50 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh 00:06:52.677 20:01:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.677 20:01:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.677 20:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.677 ************************************ 00:06:52.677 START TEST accel 00:06:52.677 ************************************ 00:06:52.677 20:01:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh 00:06:52.677 * Looking for test storage... 00:06:52.677 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel 00:06:52.677 20:01:50 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:52.677 20:01:50 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:52.677 20:01:50 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:52.677 20:01:50 -- accel/accel.sh@59 -- # spdk_tgt_pid=2053245 00:06:52.677 20:01:50 -- accel/accel.sh@60 -- # waitforlisten 2053245 00:06:52.677 20:01:50 -- common/autotest_common.sh@819 -- # '[' -z 2053245 ']' 00:06:52.677 20:01:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.677 20:01:50 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:52.677 20:01:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.677 20:01:50 -- accel/accel.sh@58 -- # build_accel_config 00:06:52.677 20:01:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.677 20:01:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.677 20:01:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.677 20:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:52.677 20:01:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.677 20:01:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.677 20:01:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.677 20:01:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.677 20:01:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.677 20:01:50 -- accel/accel.sh@42 -- # jq -r . 
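The poller_cost figures above are consistent with busy cycles divided by total_run_count, converted to nanoseconds with the reported tsc_hz (2300000000 cyc/s, i.e. 2.3 cycles per nanosecond); a quick check against the first run's counters (truncation assumed, matching the printed values):

  awk 'BEGIN { c = 2311883324/259000; printf "%d cyc, %d nsec\n", c, c/2.3 }'
  # -> 8926 cyc, 3880 nsec, as reported; the 0-microsecond run works out the same way (663 cyc, 288 nsec)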
00:06:52.677 [2024-04-25 20:01:50.529451] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:52.677 [2024-04-25 20:01:50.529527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053245 ] 00:06:52.677 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.937 [2024-04-25 20:01:50.635803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.937 [2024-04-25 20:01:50.732199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.937 [2024-04-25 20:01:50.732357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.195 [2024-04-25 20:01:50.926382] 'OCF_Core' volume operations registered 00:06:53.195 [2024-04-25 20:01:50.929886] 'OCF_Cache' volume operations registered 00:06:53.195 [2024-04-25 20:01:50.933845] 'OCF Composite' volume operations registered 00:06:53.195 [2024-04-25 20:01:50.937326] 'SPDK_block_device' volume operations registered 00:06:53.764 20:01:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.764 20:01:51 -- common/autotest_common.sh@852 -- # return 0 00:06:53.764 20:01:51 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:53.764 20:01:51 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:53.764 20:01:51 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:53.764 20:01:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.764 20:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:53.764 20:01:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 
-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # IFS== 00:06:53.764 20:01:51 -- accel/accel.sh@64 -- # read -r opc module 00:06:53.764 20:01:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:53.764 20:01:51 -- accel/accel.sh@67 -- # killprocess 2053245 00:06:53.764 20:01:51 -- common/autotest_common.sh@926 -- # '[' -z 2053245 ']' 00:06:53.764 20:01:51 -- common/autotest_common.sh@930 -- # kill -0 2053245 00:06:53.764 20:01:51 -- common/autotest_common.sh@931 -- # uname 00:06:53.764 20:01:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.764 20:01:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2053245 00:06:53.764 20:01:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:53.764 20:01:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:53.764 20:01:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2053245' 00:06:53.764 killing process with pid 2053245 00:06:53.764 20:01:51 -- common/autotest_common.sh@945 -- # kill 2053245 00:06:53.764 20:01:51 -- common/autotest_common.sh@950 -- # wait 2053245 00:06:54.333 20:01:52 -- accel/accel.sh@68 -- # trap - ERR 00:06:54.333 20:01:52 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:54.333 20:01:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:54.333 20:01:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.333 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.333 20:01:52 -- 
common/autotest_common.sh@1104 -- # accel_perf -h 00:06:54.333 20:01:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:54.333 20:01:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.333 20:01:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.333 20:01:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.333 20:01:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.333 20:01:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.333 20:01:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.333 20:01:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.333 20:01:52 -- accel/accel.sh@42 -- # jq -r . 00:06:54.333 20:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.333 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.333 20:01:52 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:54.333 20:01:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:54.333 20:01:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.333 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.333 ************************************ 00:06:54.333 START TEST accel_missing_filename 00:06:54.333 ************************************ 00:06:54.333 20:01:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:54.333 20:01:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.333 20:01:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:54.333 20:01:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:54.333 20:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.333 20:01:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:54.333 20:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.333 20:01:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:54.333 20:01:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:54.333 20:01:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.333 20:01:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.333 20:01:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.333 20:01:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.333 20:01:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.333 20:01:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.333 20:01:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.333 20:01:52 -- accel/accel.sh@42 -- # jq -r . 00:06:54.333 [2024-04-25 20:01:52.183875] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:54.333 [2024-04-25 20:01:52.183957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053500 ] 00:06:54.333 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.593 [2024-04-25 20:01:52.292305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.593 [2024-04-25 20:01:52.391836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.593 [2024-04-25 20:01:52.443716] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.593 [2024-04-25 20:01:52.516336] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:54.852 A filename is required. 00:06:54.852 20:01:52 -- common/autotest_common.sh@643 -- # es=234 00:06:54.853 20:01:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.853 20:01:52 -- common/autotest_common.sh@652 -- # es=106 00:06:54.853 20:01:52 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:54.853 20:01:52 -- common/autotest_common.sh@660 -- # es=1 00:06:54.853 20:01:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.853 00:06:54.853 real 0m0.477s 00:06:54.853 user 0m0.336s 00:06:54.853 sys 0m0.180s 00:06:54.853 20:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.853 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.853 ************************************ 00:06:54.853 END TEST accel_missing_filename 00:06:54.853 ************************************ 00:06:54.853 20:01:52 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:06:54.853 20:01:52 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:54.853 20:01:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.853 20:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:54.853 ************************************ 00:06:54.853 START TEST accel_compress_verify 00:06:54.853 ************************************ 00:06:54.853 20:01:52 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:06:54.853 20:01:52 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.853 20:01:52 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:06:54.853 20:01:52 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:54.853 20:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.853 20:01:52 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:54.853 20:01:52 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.853 20:01:52 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:06:54.853 20:01:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:06:54.853 20:01:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.853 20:01:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.853 20:01:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.853 20:01:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.853 20:01:52 -- accel/accel.sh@35 
-- # [[ 0 -gt 0 ]] 00:06:54.853 20:01:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.853 20:01:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.853 20:01:52 -- accel/accel.sh@42 -- # jq -r . 00:06:54.853 [2024-04-25 20:01:52.713830] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:54.853 [2024-04-25 20:01:52.713927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053647 ] 00:06:54.853 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.112 [2024-04-25 20:01:52.810019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.112 [2024-04-25 20:01:52.912817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.112 [2024-04-25 20:01:52.960155] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.112 [2024-04-25 20:01:53.022341] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:55.373 00:06:55.373 Compression does not support the verify option, aborting. 00:06:55.373 20:01:53 -- common/autotest_common.sh@643 -- # es=161 00:06:55.373 20:01:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:55.373 20:01:53 -- common/autotest_common.sh@652 -- # es=33 00:06:55.373 20:01:53 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:55.373 20:01:53 -- common/autotest_common.sh@660 -- # es=1 00:06:55.373 20:01:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:55.373 00:06:55.373 real 0m0.451s 00:06:55.373 user 0m0.331s 00:06:55.373 sys 0m0.161s 00:06:55.373 20:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.373 20:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.373 ************************************ 00:06:55.373 END TEST accel_compress_verify 00:06:55.373 ************************************ 00:06:55.373 20:01:53 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:55.373 20:01:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:55.373 20:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.373 20:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.373 ************************************ 00:06:55.373 START TEST accel_wrong_workload 00:06:55.373 ************************************ 00:06:55.373 20:01:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:55.373 20:01:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:55.373 20:01:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:55.373 20:01:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:55.373 20:01:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.373 20:01:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:55.373 20:01:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.373 20:01:53 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:55.373 20:01:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:55.373 20:01:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.373 20:01:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.373 20:01:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.373 20:01:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.373 20:01:53 -- 
accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.373 20:01:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.373 20:01:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.373 20:01:53 -- accel/accel.sh@42 -- # jq -r . 00:06:55.373 Unsupported workload type: foobar 00:06:55.373 [2024-04-25 20:01:53.210474] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:55.373 accel_perf options: 00:06:55.373 [-h help message] 00:06:55.373 [-q queue depth per core] 00:06:55.373 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:55.373 [-T number of threads per core 00:06:55.373 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:55.373 [-t time in seconds] 00:06:55.373 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:55.373 [ dif_verify, , dif_generate, dif_generate_copy 00:06:55.373 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:55.373 [-l for compress/decompress workloads, name of uncompressed input file 00:06:55.373 [-S for crc32c workload, use this seed value (default 0) 00:06:55.373 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:55.373 [-f for fill workload, use this BYTE value (default 255) 00:06:55.373 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:55.373 [-y verify result if this switch is on] 00:06:55.373 [-a tasks to allocate per core (default: same value as -q)] 00:06:55.373 Can be used to spread operations across a wider range of memory. 00:06:55.373 20:01:53 -- common/autotest_common.sh@643 -- # es=1 00:06:55.373 20:01:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:55.373 20:01:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:55.373 20:01:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:55.373 00:06:55.373 real 0m0.037s 00:06:55.373 user 0m0.019s 00:06:55.373 sys 0m0.018s 00:06:55.373 20:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.373 20:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.373 ************************************ 00:06:55.373 END TEST accel_wrong_workload 00:06:55.373 ************************************ 00:06:55.373 Error: writing output failed: Broken pipe 00:06:55.373 20:01:53 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:55.373 20:01:53 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:55.373 20:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.373 20:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.373 ************************************ 00:06:55.373 START TEST accel_negative_buffers 00:06:55.373 ************************************ 00:06:55.373 20:01:53 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:55.373 20:01:53 -- common/autotest_common.sh@640 -- # local es=0 00:06:55.373 20:01:53 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:55.373 20:01:53 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:55.373 20:01:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.373 20:01:53 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:55.373 20:01:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:55.373 20:01:53 -- 
common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:55.373 20:01:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:55.373 20:01:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.373 20:01:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.373 20:01:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.373 20:01:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.373 20:01:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.373 20:01:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.373 20:01:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.373 20:01:53 -- accel/accel.sh@42 -- # jq -r . 00:06:55.373 -x option must be non-negative. 00:06:55.373 [2024-04-25 20:01:53.285249] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:55.373 accel_perf options: 00:06:55.373 [-h help message] 00:06:55.373 [-q queue depth per core] 00:06:55.373 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:55.373 [-T number of threads per core 00:06:55.373 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:55.373 [-t time in seconds] 00:06:55.373 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:55.373 [ dif_verify, , dif_generate, dif_generate_copy 00:06:55.373 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:55.373 [-l for compress/decompress workloads, name of uncompressed input file 00:06:55.373 [-S for crc32c workload, use this seed value (default 0) 00:06:55.373 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:55.373 [-f for fill workload, use this BYTE value (default 255) 00:06:55.373 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:55.373 [-y verify result if this switch is on] 00:06:55.373 [-a tasks to allocate per core (default: same value as -q)] 00:06:55.373 Can be used to spread operations across a wider range of memory. 
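The option listing above is dumped whenever accel_perf rejects its arguments (here the negative -x value). For reference, a plain invocation of the kind the crc32c tests below exercise, with the defaults visible in their output spelled out explicitly (-q 32 queue depth, -o 4096-byte transfers); no flags beyond those shown in this log are assumed:

  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 32 -o 4096
  # 1-second software crc32c run, seed 32, verify enabled; note that -y is rejected for -w compress
  # ("Compression does not support the verify option"), as the compress_verify test above shows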
00:06:55.373 20:01:53 -- common/autotest_common.sh@643 -- # es=1 00:06:55.373 20:01:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:55.373 20:01:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:55.373 20:01:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:55.373 00:06:55.373 real 0m0.025s 00:06:55.373 user 0m0.014s 00:06:55.373 sys 0m0.011s 00:06:55.373 20:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.373 20:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.373 ************************************ 00:06:55.373 END TEST accel_negative_buffers 00:06:55.373 ************************************ 00:06:55.632 Error: writing output failed: Broken pipe 00:06:55.632 20:01:53 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:55.632 20:01:53 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:55.632 20:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.632 20:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:55.632 ************************************ 00:06:55.632 START TEST accel_crc32c 00:06:55.632 ************************************ 00:06:55.632 20:01:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:55.632 20:01:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.632 20:01:53 -- accel/accel.sh@17 -- # local accel_module 00:06:55.632 20:01:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:55.632 20:01:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:55.632 20:01:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.632 20:01:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.632 20:01:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.632 20:01:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.632 20:01:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.632 20:01:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.632 20:01:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.632 20:01:53 -- accel/accel.sh@42 -- # jq -r . 00:06:55.632 [2024-04-25 20:01:53.365471] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:55.632 [2024-04-25 20:01:53.365542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053714 ] 00:06:55.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.632 [2024-04-25 20:01:53.476027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.892 [2024-04-25 20:01:53.582392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.271 20:01:54 -- accel/accel.sh@18 -- # out=' 00:06:57.271 SPDK Configuration: 00:06:57.271 Core mask: 0x1 00:06:57.271 00:06:57.271 Accel Perf Configuration: 00:06:57.271 Workload Type: crc32c 00:06:57.271 CRC-32C seed: 32 00:06:57.271 Transfer size: 4096 bytes 00:06:57.271 Vector count 1 00:06:57.271 Module: software 00:06:57.271 Queue depth: 32 00:06:57.271 Allocate depth: 32 00:06:57.271 # threads/core: 1 00:06:57.271 Run time: 1 seconds 00:06:57.271 Verify: Yes 00:06:57.271 00:06:57.271 Running for 1 seconds... 
00:06:57.271 00:06:57.271 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.271 ------------------------------------------------------------------------------------ 00:06:57.271 0,0 369664/s 1444 MiB/s 0 0 00:06:57.271 ==================================================================================== 00:06:57.271 Total 369664/s 1444 MiB/s 0 0' 00:06:57.271 20:01:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:57.271 20:01:54 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:54 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:57.271 20:01:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.271 20:01:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.271 20:01:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.271 20:01:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.271 20:01:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.271 20:01:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.271 20:01:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.271 20:01:54 -- accel/accel.sh@42 -- # jq -r . 00:06:57.271 [2024-04-25 20:01:54.843716] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:06:57.271 [2024-04-25 20:01:54.843789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053921 ] 00:06:57.271 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.271 [2024-04-25 20:01:54.938037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.271 [2024-04-25 20:01:55.037173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val= 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val= 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val=0x1 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val= 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val= 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val=crc32c 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val=32 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 
-- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val= 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.271 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.271 20:01:55 -- accel/accel.sh@21 -- # val=software 00:06:57.271 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.272 20:01:55 -- accel/accel.sh@21 -- # val=32 00:06:57.272 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.272 20:01:55 -- accel/accel.sh@21 -- # val=32 00:06:57.272 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.272 20:01:55 -- accel/accel.sh@21 -- # val=1 00:06:57.272 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.272 20:01:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.272 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.272 20:01:55 -- accel/accel.sh@21 -- # val=Yes 00:06:57.272 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.272 20:01:55 -- accel/accel.sh@21 -- # val= 00:06:57.272 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:57.272 20:01:55 -- accel/accel.sh@21 -- # val= 00:06:57.272 20:01:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # IFS=: 00:06:57.272 20:01:55 -- accel/accel.sh@20 -- # read -r var val 00:06:58.670 20:01:56 -- accel/accel.sh@21 -- # val= 00:06:58.670 20:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.670 20:01:56 -- accel/accel.sh@21 -- # val= 00:06:58.670 20:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.670 20:01:56 -- accel/accel.sh@21 -- # val= 00:06:58.670 20:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.670 20:01:56 -- accel/accel.sh@21 -- # val= 00:06:58.670 20:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.670 20:01:56 -- accel/accel.sh@21 -- # val= 00:06:58.670 20:01:56 -- accel/accel.sh@22 -- # case "$var" in 
00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.670 20:01:56 -- accel/accel.sh@21 -- # val= 00:06:58.670 20:01:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # IFS=: 00:06:58.670 20:01:56 -- accel/accel.sh@20 -- # read -r var val 00:06:58.670 20:01:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.670 20:01:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:58.670 20:01:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.670 00:06:58.670 real 0m2.947s 00:06:58.670 user 0m2.618s 00:06:58.670 sys 0m0.333s 00:06:58.670 20:01:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.670 20:01:56 -- common/autotest_common.sh@10 -- # set +x 00:06:58.670 ************************************ 00:06:58.670 END TEST accel_crc32c 00:06:58.670 ************************************ 00:06:58.670 20:01:56 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:58.670 20:01:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:58.670 20:01:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.670 20:01:56 -- common/autotest_common.sh@10 -- # set +x 00:06:58.670 ************************************ 00:06:58.670 START TEST accel_crc32c_C2 00:06:58.670 ************************************ 00:06:58.670 20:01:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:58.670 20:01:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.670 20:01:56 -- accel/accel.sh@17 -- # local accel_module 00:06:58.670 20:01:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:58.670 20:01:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:58.670 20:01:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.670 20:01:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.670 20:01:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.670 20:01:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.670 20:01:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.670 20:01:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.670 20:01:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.670 20:01:56 -- accel/accel.sh@42 -- # jq -r . 00:06:58.670 [2024-04-25 20:01:56.348284] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:06:58.670 [2024-04-25 20:01:56.348353] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054189 ] 00:06:58.670 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.670 [2024-04-25 20:01:56.451955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.670 [2024-04-25 20:01:56.550659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.046 20:01:57 -- accel/accel.sh@18 -- # out=' 00:07:00.046 SPDK Configuration: 00:07:00.046 Core mask: 0x1 00:07:00.046 00:07:00.046 Accel Perf Configuration: 00:07:00.046 Workload Type: crc32c 00:07:00.046 CRC-32C seed: 0 00:07:00.046 Transfer size: 4096 bytes 00:07:00.046 Vector count 2 00:07:00.046 Module: software 00:07:00.046 Queue depth: 32 00:07:00.046 Allocate depth: 32 00:07:00.046 # threads/core: 1 00:07:00.046 Run time: 1 seconds 00:07:00.046 Verify: Yes 00:07:00.046 00:07:00.046 Running for 1 seconds... 00:07:00.046 00:07:00.046 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.046 ------------------------------------------------------------------------------------ 00:07:00.046 0,0 292064/s 2281 MiB/s 0 0 00:07:00.046 ==================================================================================== 00:07:00.046 Total 292064/s 1140 MiB/s 0 0' 00:07:00.046 20:01:57 -- accel/accel.sh@20 -- # IFS=: 00:07:00.046 20:01:57 -- accel/accel.sh@20 -- # read -r var val 00:07:00.046 20:01:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.046 20:01:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.046 20:01:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.046 20:01:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.046 20:01:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.046 20:01:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.046 20:01:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.046 20:01:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.046 20:01:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.046 20:01:57 -- accel/accel.sh@42 -- # jq -r . 00:07:00.046 [2024-04-25 20:01:57.822010] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
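Each pass recorded here is the accel_perf example binary; the full command line appears verbatim in the log (for instance /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2). A minimal bash sketch for replaying the two crc32c passes by hand follows, assuming the same SPDK build tree and hugepages already configured; the -c /dev/fd/62 plumbing, which the harness uses to feed its (empty) accel JSON config, is omitted here on the assumption that accel_perf then falls back to the software module shown in the tables above.

  # Sketch: replay the crc32c passes outside the harness (assumptions above).
  PERF=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf
  "$PERF" -t 1 -w crc32c -S 32 -y   # single-vector pass (accel_crc32c)
  "$PERF" -t 1 -w crc32c -y -C 2    # two-vector pass (accel_crc32c_C2)
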
00:07:00.046 [2024-04-25 20:01:57.822087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054420 ] 00:07:00.046 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.046 [2024-04-25 20:01:57.926609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.304 [2024-04-25 20:01:58.025701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val= 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val= 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val=0x1 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val= 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val= 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val=crc32c 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val=0 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val= 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val=software 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val=32 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val=32 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- 
accel/accel.sh@21 -- # val=1 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val=Yes 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val= 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:00.304 20:01:58 -- accel/accel.sh@21 -- # val= 00:07:00.304 20:01:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # IFS=: 00:07:00.304 20:01:58 -- accel/accel.sh@20 -- # read -r var val 00:07:01.678 20:01:59 -- accel/accel.sh@21 -- # val= 00:07:01.678 20:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.678 20:01:59 -- accel/accel.sh@21 -- # val= 00:07:01.678 20:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.678 20:01:59 -- accel/accel.sh@21 -- # val= 00:07:01.678 20:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.678 20:01:59 -- accel/accel.sh@21 -- # val= 00:07:01.678 20:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.678 20:01:59 -- accel/accel.sh@21 -- # val= 00:07:01.678 20:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.678 20:01:59 -- accel/accel.sh@21 -- # val= 00:07:01.678 20:01:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # IFS=: 00:07:01.678 20:01:59 -- accel/accel.sh@20 -- # read -r var val 00:07:01.678 20:01:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.678 20:01:59 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:01.678 20:01:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.678 00:07:01.678 real 0m2.940s 00:07:01.678 user 0m2.624s 00:07:01.678 sys 0m0.318s 00:07:01.678 20:01:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.678 20:01:59 -- common/autotest_common.sh@10 -- # set +x 00:07:01.678 ************************************ 00:07:01.678 END TEST accel_crc32c_C2 00:07:01.678 ************************************ 00:07:01.678 20:01:59 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:01.678 20:01:59 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:01.678 20:01:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.678 20:01:59 -- common/autotest_common.sh@10 -- # set +x 00:07:01.678 ************************************ 00:07:01.678 START TEST accel_copy 
00:07:01.678 ************************************ 00:07:01.678 20:01:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:01.678 20:01:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.678 20:01:59 -- accel/accel.sh@17 -- # local accel_module 00:07:01.678 20:01:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:01.678 20:01:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.678 20:01:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.678 20:01:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.678 20:01:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.678 20:01:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.678 20:01:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.678 20:01:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.678 20:01:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.678 20:01:59 -- accel/accel.sh@42 -- # jq -r . 00:07:01.678 [2024-04-25 20:01:59.343117] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:01.678 [2024-04-25 20:01:59.343207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054634 ] 00:07:01.678 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.678 [2024-04-25 20:01:59.446445] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.678 [2024-04-25 20:01:59.541548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.056 20:02:00 -- accel/accel.sh@18 -- # out=' 00:07:03.056 SPDK Configuration: 00:07:03.056 Core mask: 0x1 00:07:03.056 00:07:03.056 Accel Perf Configuration: 00:07:03.056 Workload Type: copy 00:07:03.056 Transfer size: 4096 bytes 00:07:03.056 Vector count 1 00:07:03.056 Module: software 00:07:03.056 Queue depth: 32 00:07:03.056 Allocate depth: 32 00:07:03.056 # threads/core: 1 00:07:03.056 Run time: 1 seconds 00:07:03.057 Verify: Yes 00:07:03.057 00:07:03.057 Running for 1 seconds... 00:07:03.057 00:07:03.057 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.057 ------------------------------------------------------------------------------------ 00:07:03.057 0,0 276704/s 1080 MiB/s 0 0 00:07:03.057 ==================================================================================== 00:07:03.057 Total 276704/s 1080 MiB/s 0 0' 00:07:03.057 20:02:00 -- accel/accel.sh@20 -- # IFS=: 00:07:03.057 20:02:00 -- accel/accel.sh@20 -- # read -r var val 00:07:03.057 20:02:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:03.057 20:02:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.057 20:02:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:03.057 20:02:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.057 20:02:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.057 20:02:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.057 20:02:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.057 20:02:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.057 20:02:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.057 20:02:00 -- accel/accel.sh@42 -- # jq -r . 00:07:03.057 [2024-04-25 20:02:00.791579] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:03.057 [2024-04-25 20:02:00.791656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054823 ] 00:07:03.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.057 [2024-04-25 20:02:00.895077] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.316 [2024-04-25 20:02:00.993955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val= 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val= 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val=0x1 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val= 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val= 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val=copy 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val= 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val=software 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val=32 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val=32 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.316 20:02:01 -- accel/accel.sh@21 -- # val=1 00:07:03.316 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.316 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 20:02:01 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:03.317 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 20:02:01 -- accel/accel.sh@21 -- # val=Yes 00:07:03.317 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 20:02:01 -- accel/accel.sh@21 -- # val= 00:07:03.317 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:03.317 20:02:01 -- accel/accel.sh@21 -- # val= 00:07:03.317 20:02:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # IFS=: 00:07:03.317 20:02:01 -- accel/accel.sh@20 -- # read -r var val 00:07:04.695 20:02:02 -- accel/accel.sh@21 -- # val= 00:07:04.695 20:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.695 20:02:02 -- accel/accel.sh@21 -- # val= 00:07:04.695 20:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.695 20:02:02 -- accel/accel.sh@21 -- # val= 00:07:04.695 20:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.695 20:02:02 -- accel/accel.sh@21 -- # val= 00:07:04.695 20:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.695 20:02:02 -- accel/accel.sh@21 -- # val= 00:07:04.695 20:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.695 20:02:02 -- accel/accel.sh@21 -- # val= 00:07:04.695 20:02:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # IFS=: 00:07:04.695 20:02:02 -- accel/accel.sh@20 -- # read -r var val 00:07:04.695 20:02:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.695 20:02:02 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:04.695 20:02:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.695 00:07:04.695 real 0m2.919s 00:07:04.695 user 0m2.598s 00:07:04.695 sys 0m0.324s 00:07:04.695 20:02:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.695 20:02:02 -- common/autotest_common.sh@10 -- # set +x 00:07:04.695 ************************************ 00:07:04.695 END TEST accel_copy 00:07:04.695 ************************************ 00:07:04.695 20:02:02 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.695 20:02:02 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:04.695 20:02:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.695 20:02:02 -- common/autotest_common.sh@10 -- # set +x 00:07:04.695 ************************************ 00:07:04.695 START TEST accel_fill 00:07:04.695 ************************************ 00:07:04.695 20:02:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.695 20:02:02 -- accel/accel.sh@16 -- # local accel_opc 
00:07:04.695 20:02:02 -- accel/accel.sh@17 -- # local accel_module 00:07:04.695 20:02:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.695 20:02:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:04.695 20:02:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.695 20:02:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.695 20:02:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.695 20:02:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.695 20:02:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.695 20:02:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.695 20:02:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.695 20:02:02 -- accel/accel.sh@42 -- # jq -r . 00:07:04.695 [2024-04-25 20:02:02.303245] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:04.695 [2024-04-25 20:02:02.303316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055017 ] 00:07:04.695 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.695 [2024-04-25 20:02:02.400449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.695 [2024-04-25 20:02:02.499588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.074 20:02:03 -- accel/accel.sh@18 -- # out=' 00:07:06.074 SPDK Configuration: 00:07:06.074 Core mask: 0x1 00:07:06.074 00:07:06.074 Accel Perf Configuration: 00:07:06.074 Workload Type: fill 00:07:06.074 Fill pattern: 0x80 00:07:06.074 Transfer size: 4096 bytes 00:07:06.074 Vector count 1 00:07:06.074 Module: software 00:07:06.074 Queue depth: 64 00:07:06.074 Allocate depth: 64 00:07:06.074 # threads/core: 1 00:07:06.074 Run time: 1 seconds 00:07:06.074 Verify: Yes 00:07:06.074 00:07:06.074 Running for 1 seconds... 00:07:06.074 00:07:06.074 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.074 ------------------------------------------------------------------------------------ 00:07:06.074 0,0 426816/s 1667 MiB/s 0 0 00:07:06.074 ==================================================================================== 00:07:06.074 Total 426816/s 1667 MiB/s 0 0' 00:07:06.074 20:02:03 -- accel/accel.sh@20 -- # IFS=: 00:07:06.074 20:02:03 -- accel/accel.sh@20 -- # read -r var val 00:07:06.074 20:02:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.074 20:02:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.074 20:02:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:06.074 20:02:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.074 20:02:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.074 20:02:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.074 20:02:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.074 20:02:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.074 20:02:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.074 20:02:03 -- accel/accel.sh@42 -- # jq -r . 00:07:06.074 [2024-04-25 20:02:03.761457] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
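The fill pass above is the only one that adds -f, -q and -a: matching the logged command line against its configuration block, -f 128 shows up as "Fill pattern: 0x80" (128 decimal), -q 64 as "Queue depth: 64", and -a 64 as "Allocate depth: 64". A standalone sketch of the same invocation, under the same assumptions as the crc32c sketch above:

  # Fill workload as logged: pattern 0x80 (-f 128), queue and allocate depth 64.
  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w fill -f 128 -q 64 -a 64 -y
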
00:07:06.074 [2024-04-25 20:02:03.761527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055203 ] 00:07:06.074 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.074 [2024-04-25 20:02:03.867062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.074 [2024-04-25 20:02:03.964986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val= 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val= 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val=0x1 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val= 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val= 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val=fill 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val=0x80 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val= 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val=software 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val=64 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val=64 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- 
accel/accel.sh@21 -- # val=1 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val=Yes 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val= 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:06.333 20:02:04 -- accel/accel.sh@21 -- # val= 00:07:06.333 20:02:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # IFS=: 00:07:06.333 20:02:04 -- accel/accel.sh@20 -- # read -r var val 00:07:07.271 20:02:05 -- accel/accel.sh@21 -- # val= 00:07:07.271 20:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.271 20:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.271 20:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.532 20:02:05 -- accel/accel.sh@21 -- # val= 00:07:07.532 20:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.532 20:02:05 -- accel/accel.sh@21 -- # val= 00:07:07.532 20:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.532 20:02:05 -- accel/accel.sh@21 -- # val= 00:07:07.532 20:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.532 20:02:05 -- accel/accel.sh@21 -- # val= 00:07:07.532 20:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.532 20:02:05 -- accel/accel.sh@21 -- # val= 00:07:07.532 20:02:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # IFS=: 00:07:07.532 20:02:05 -- accel/accel.sh@20 -- # read -r var val 00:07:07.532 20:02:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.532 20:02:05 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:07.532 20:02:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.532 00:07:07.532 real 0m2.937s 00:07:07.532 user 0m2.612s 00:07:07.532 sys 0m0.330s 00:07:07.532 20:02:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.532 20:02:05 -- common/autotest_common.sh@10 -- # set +x 00:07:07.532 ************************************ 00:07:07.532 END TEST accel_fill 00:07:07.532 ************************************ 00:07:07.532 20:02:05 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:07.532 20:02:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:07.532 20:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.532 20:02:05 -- common/autotest_common.sh@10 -- # set +x 00:07:07.532 ************************************ 00:07:07.532 START TEST 
accel_copy_crc32c 00:07:07.532 ************************************ 00:07:07.532 20:02:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:07.532 20:02:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.532 20:02:05 -- accel/accel.sh@17 -- # local accel_module 00:07:07.532 20:02:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:07.532 20:02:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:07.532 20:02:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.532 20:02:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.532 20:02:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.532 20:02:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.532 20:02:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.532 20:02:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.532 20:02:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.532 20:02:05 -- accel/accel.sh@42 -- # jq -r . 00:07:07.532 [2024-04-25 20:02:05.280386] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:07.532 [2024-04-25 20:02:05.280466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055399 ] 00:07:07.532 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.532 [2024-04-25 20:02:05.387877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.839 [2024-04-25 20:02:05.488741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.798 20:02:06 -- accel/accel.sh@18 -- # out=' 00:07:08.798 SPDK Configuration: 00:07:08.798 Core mask: 0x1 00:07:08.798 00:07:08.798 Accel Perf Configuration: 00:07:08.798 Workload Type: copy_crc32c 00:07:08.798 CRC-32C seed: 0 00:07:08.798 Vector size: 4096 bytes 00:07:08.798 Transfer size: 4096 bytes 00:07:08.798 Vector count 1 00:07:08.798 Module: software 00:07:08.798 Queue depth: 32 00:07:08.798 Allocate depth: 32 00:07:08.798 # threads/core: 1 00:07:08.798 Run time: 1 seconds 00:07:08.798 Verify: Yes 00:07:08.798 00:07:08.798 Running for 1 seconds... 00:07:08.798 00:07:08.798 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.798 ------------------------------------------------------------------------------------ 00:07:08.798 0,0 212416/s 829 MiB/s 0 0 00:07:08.798 ==================================================================================== 00:07:08.798 Total 212416/s 829 MiB/s 0 0' 00:07:08.798 20:02:06 -- accel/accel.sh@20 -- # IFS=: 00:07:08.798 20:02:06 -- accel/accel.sh@20 -- # read -r var val 00:07:08.798 20:02:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:09.057 20:02:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.057 20:02:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:09.057 20:02:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.057 20:02:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.057 20:02:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.057 20:02:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.057 20:02:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.057 20:02:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.057 20:02:06 -- accel/accel.sh@42 -- # jq -r . 
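The MiB/s column in these tables is transfers/s multiplied by the 4096-byte transfer size (with 2^20 bytes per MiB): for the copy_crc32c table above, 212416/s x 4096 B works out to 829 MiB/s, exactly the printed figure. In the two-vector runs (crc32c -C 2 earlier, copy_crc32c -C 2 below) the per-core row appears to count both 4096-byte vectors while the Total row counts only one, which is why those rows differ by a factor of two (2281 vs 1140 MiB/s, and later 1194 vs 597 MiB/s). A quick shell check of that reading, which is an inference from the numbers rather than from the accel_perf source:

  # MiB/s = transfers/s * bytes per transfer / 2^20 (integer-truncated)
  echo $(( 212416 * 4096 / 1048576 ))        # 829  - copy_crc32c row above
  echo $(( 292064 * 2 * 4096 / 1048576 ))    # 2281 - per-core row of crc32c -C 2
  echo $(( 292064 * 4096 / 1048576 ))        # 1140 - Total row of crc32c -C 2
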
00:07:09.057 [2024-04-25 20:02:06.759482] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:09.057 [2024-04-25 20:02:06.759569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055586 ] 00:07:09.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.057 [2024-04-25 20:02:06.864676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.057 [2024-04-25 20:02:06.966168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.315 20:02:07 -- accel/accel.sh@21 -- # val= 00:07:09.315 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.315 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.315 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val= 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=0x1 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val= 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val= 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=0 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val= 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=software 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=32 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 
00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=32 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=1 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val=Yes 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val= 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:09.316 20:02:07 -- accel/accel.sh@21 -- # val= 00:07:09.316 20:02:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # IFS=: 00:07:09.316 20:02:07 -- accel/accel.sh@20 -- # read -r var val 00:07:10.692 20:02:08 -- accel/accel.sh@21 -- # val= 00:07:10.692 20:02:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # IFS=: 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # read -r var val 00:07:10.692 20:02:08 -- accel/accel.sh@21 -- # val= 00:07:10.692 20:02:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # IFS=: 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # read -r var val 00:07:10.692 20:02:08 -- accel/accel.sh@21 -- # val= 00:07:10.692 20:02:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # IFS=: 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # read -r var val 00:07:10.692 20:02:08 -- accel/accel.sh@21 -- # val= 00:07:10.692 20:02:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # IFS=: 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # read -r var val 00:07:10.692 20:02:08 -- accel/accel.sh@21 -- # val= 00:07:10.692 20:02:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # IFS=: 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # read -r var val 00:07:10.692 20:02:08 -- accel/accel.sh@21 -- # val= 00:07:10.692 20:02:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # IFS=: 00:07:10.692 20:02:08 -- accel/accel.sh@20 -- # read -r var val 00:07:10.692 20:02:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.693 20:02:08 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:10.693 20:02:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.693 00:07:10.693 real 0m2.956s 00:07:10.693 user 0m2.637s 00:07:10.693 sys 0m0.326s 00:07:10.693 20:02:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.693 20:02:08 -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 ************************************ 00:07:10.693 END TEST accel_copy_crc32c 00:07:10.693 ************************************ 00:07:10.693 
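Every block here, including the accel_copy_crc32c_C2 pass that starts next, is launched through run_test, which prints the START TEST/END TEST banners and times the body; the real/user/sys lines above come from that timing, and the xtrace frames suggest run_test lives in common/autotest_common.sh. A rough, hypothetical reconstruction of its visible behaviour (run_test_sketch and the sleep stand-in are illustrative only, not SPDK code):

  # Hypothetical sketch of what the wrapper does as seen in this log.
  run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
  }
  # The log calls it as: run_test accel_copy accel_test -t 1 -w copy -y
  run_test_sketch demo sleep 1
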
20:02:08 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.693 20:02:08 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:10.693 20:02:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.693 20:02:08 -- common/autotest_common.sh@10 -- # set +x 00:07:10.693 ************************************ 00:07:10.693 START TEST accel_copy_crc32c_C2 00:07:10.693 ************************************ 00:07:10.693 20:02:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:10.693 20:02:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.693 20:02:08 -- accel/accel.sh@17 -- # local accel_module 00:07:10.693 20:02:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.693 20:02:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.693 20:02:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.693 20:02:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.693 20:02:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.693 20:02:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.693 20:02:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.693 20:02:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.693 20:02:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.693 20:02:08 -- accel/accel.sh@42 -- # jq -r . 00:07:10.693 [2024-04-25 20:02:08.286873] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:10.693 [2024-04-25 20:02:08.286954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055797 ] 00:07:10.693 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.693 [2024-04-25 20:02:08.394796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.693 [2024-04-25 20:02:08.499285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.072 20:02:09 -- accel/accel.sh@18 -- # out=' 00:07:12.072 SPDK Configuration: 00:07:12.072 Core mask: 0x1 00:07:12.072 00:07:12.072 Accel Perf Configuration: 00:07:12.072 Workload Type: copy_crc32c 00:07:12.072 CRC-32C seed: 0 00:07:12.072 Vector size: 4096 bytes 00:07:12.072 Transfer size: 8192 bytes 00:07:12.072 Vector count 2 00:07:12.072 Module: software 00:07:12.072 Queue depth: 32 00:07:12.072 Allocate depth: 32 00:07:12.072 # threads/core: 1 00:07:12.072 Run time: 1 seconds 00:07:12.072 Verify: Yes 00:07:12.072 00:07:12.072 Running for 1 seconds... 
00:07:12.072 00:07:12.072 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.072 ------------------------------------------------------------------------------------ 00:07:12.072 0,0 152832/s 1194 MiB/s 0 0 00:07:12.072 ==================================================================================== 00:07:12.072 Total 152832/s 597 MiB/s 0 0' 00:07:12.072 20:02:09 -- accel/accel.sh@20 -- # IFS=: 00:07:12.072 20:02:09 -- accel/accel.sh@20 -- # read -r var val 00:07:12.072 20:02:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:12.072 20:02:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.072 20:02:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:12.072 20:02:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.072 20:02:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.072 20:02:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.072 20:02:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.072 20:02:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.072 20:02:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.072 20:02:09 -- accel/accel.sh@42 -- # jq -r . 00:07:12.072 [2024-04-25 20:02:09.769806] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:12.072 [2024-04-25 20:02:09.769877] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056039 ] 00:07:12.072 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.072 [2024-04-25 20:02:09.877311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.072 [2024-04-25 20:02:09.976246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val=0x1 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val=0 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 
00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:12.331 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.331 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.331 20:02:10 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val=software 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val=32 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val=32 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val=1 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val=Yes 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:12.332 20:02:10 -- accel/accel.sh@21 -- # val= 00:07:12.332 20:02:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # IFS=: 00:07:12.332 20:02:10 -- accel/accel.sh@20 -- # read -r var val 00:07:13.710 20:02:11 -- accel/accel.sh@21 -- # val= 00:07:13.710 20:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.710 20:02:11 -- accel/accel.sh@21 -- # val= 00:07:13.710 20:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.710 20:02:11 -- accel/accel.sh@21 -- # val= 00:07:13.710 20:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.710 20:02:11 -- accel/accel.sh@21 -- # val= 00:07:13.710 20:02:11 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.710 20:02:11 -- accel/accel.sh@21 -- # val= 00:07:13.710 20:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.710 20:02:11 -- accel/accel.sh@21 -- # val= 00:07:13.710 20:02:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # IFS=: 00:07:13.710 20:02:11 -- accel/accel.sh@20 -- # read -r var val 00:07:13.710 20:02:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.710 20:02:11 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:13.710 20:02:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.710 00:07:13.710 real 0m2.957s 00:07:13.710 user 0m2.618s 00:07:13.710 sys 0m0.345s 00:07:13.710 20:02:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.711 20:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.711 ************************************ 00:07:13.711 END TEST accel_copy_crc32c_C2 00:07:13.711 ************************************ 00:07:13.711 20:02:11 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:13.711 20:02:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:13.711 20:02:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.711 20:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:13.711 ************************************ 00:07:13.711 START TEST accel_dualcast 00:07:13.711 ************************************ 00:07:13.711 20:02:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:13.711 20:02:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.711 20:02:11 -- accel/accel.sh@17 -- # local accel_module 00:07:13.711 20:02:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:13.711 20:02:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.711 20:02:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.711 20:02:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.711 20:02:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.711 20:02:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.711 20:02:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.711 20:02:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.711 20:02:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.711 20:02:11 -- accel/accel.sh@42 -- # jq -r . 00:07:13.711 [2024-04-25 20:02:11.283473] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
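The dualcast case starting above is launched the same way as the other accel cases in this section: accel_perf with -t 1 (run time in seconds), -w <workload>, and -y (verify), plus -c /dev/fd/62, which the harness appears to use to feed its generated accel JSON config through a file descriptor. A minimal manual re-run against the same build tree might look like the sketch below; dropping -c and relying on defaults is an assumption, not something this log demonstrates:

    # hypothetical standalone re-run of the dualcast case shown above
    /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dualcast -y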
00:07:13.711 [2024-04-25 20:02:11.283542] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056314 ] 00:07:13.711 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.711 [2024-04-25 20:02:11.391157] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.711 [2024-04-25 20:02:11.489946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.091 20:02:12 -- accel/accel.sh@18 -- # out=' 00:07:15.091 SPDK Configuration: 00:07:15.091 Core mask: 0x1 00:07:15.091 00:07:15.091 Accel Perf Configuration: 00:07:15.091 Workload Type: dualcast 00:07:15.091 Transfer size: 4096 bytes 00:07:15.091 Vector count 1 00:07:15.091 Module: software 00:07:15.091 Queue depth: 32 00:07:15.091 Allocate depth: 32 00:07:15.091 # threads/core: 1 00:07:15.091 Run time: 1 seconds 00:07:15.091 Verify: Yes 00:07:15.091 00:07:15.091 Running for 1 seconds... 00:07:15.091 00:07:15.091 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:15.091 ------------------------------------------------------------------------------------ 00:07:15.091 0,0 326784/s 1276 MiB/s 0 0 00:07:15.091 ==================================================================================== 00:07:15.091 Total 326784/s 1276 MiB/s 0 0' 00:07:15.091 20:02:12 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:12 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:15.091 20:02:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.091 20:02:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:15.091 20:02:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.091 20:02:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.091 20:02:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.091 20:02:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.091 20:02:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.091 20:02:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.091 20:02:12 -- accel/accel.sh@42 -- # jq -r . 00:07:15.091 [2024-04-25 20:02:12.760360] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
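Every accel_perf start in this section prints "EAL: No free 2048 kB hugepages reported on node 1". The runs still complete and report results, so on this node the message looks informational rather than fatal (an interpretation; the log itself does not say so). To inspect per-node hugepage availability on such a host, a diagnostic sketch outside the harness:

    # hugepage totals and free 2048 kB pages per NUMA node
    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages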
00:07:15.091 [2024-04-25 20:02:12.760444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056516 ] 00:07:15.091 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.091 [2024-04-25 20:02:12.864533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.091 [2024-04-25 20:02:12.963150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val=0x1 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val=dualcast 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val=software 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val=32 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.091 20:02:13 -- accel/accel.sh@21 -- # val=32 00:07:15.091 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.091 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.349 20:02:13 -- accel/accel.sh@21 -- # val=1 00:07:15.349 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.349 20:02:13 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:15.349 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.349 20:02:13 -- accel/accel.sh@21 -- # val=Yes 00:07:15.349 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.349 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:15.349 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:15.349 20:02:13 -- accel/accel.sh@21 -- # val= 00:07:15.349 20:02:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # IFS=: 00:07:15.349 20:02:13 -- accel/accel.sh@20 -- # read -r var val 00:07:16.284 20:02:14 -- accel/accel.sh@21 -- # val= 00:07:16.284 20:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.284 20:02:14 -- accel/accel.sh@21 -- # val= 00:07:16.284 20:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.284 20:02:14 -- accel/accel.sh@21 -- # val= 00:07:16.284 20:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.284 20:02:14 -- accel/accel.sh@21 -- # val= 00:07:16.284 20:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.284 20:02:14 -- accel/accel.sh@21 -- # val= 00:07:16.284 20:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.284 20:02:14 -- accel/accel.sh@21 -- # val= 00:07:16.284 20:02:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # IFS=: 00:07:16.284 20:02:14 -- accel/accel.sh@20 -- # read -r var val 00:07:16.284 20:02:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:16.284 20:02:14 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:16.284 20:02:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.284 00:07:16.284 real 0m2.954s 00:07:16.284 user 0m2.610s 00:07:16.284 sys 0m0.347s 00:07:16.284 20:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.284 20:02:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.284 ************************************ 00:07:16.284 END TEST accel_dualcast 00:07:16.284 ************************************ 00:07:16.544 20:02:14 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:16.544 20:02:14 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:16.544 20:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:16.544 20:02:14 -- common/autotest_common.sh@10 -- # set +x 00:07:16.544 ************************************ 00:07:16.544 START TEST accel_compare 00:07:16.544 ************************************ 00:07:16.544 20:02:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:16.544 20:02:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:16.544 20:02:14 
-- accel/accel.sh@17 -- # local accel_module 00:07:16.544 20:02:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:16.544 20:02:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:16.544 20:02:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.544 20:02:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.544 20:02:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.544 20:02:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.544 20:02:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.544 20:02:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.544 20:02:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.544 20:02:14 -- accel/accel.sh@42 -- # jq -r . 00:07:16.544 [2024-04-25 20:02:14.288429] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:16.544 [2024-04-25 20:02:14.288511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056711 ] 00:07:16.544 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.544 [2024-04-25 20:02:14.397666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.802 [2024-04-25 20:02:14.502421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.181 20:02:15 -- accel/accel.sh@18 -- # out=' 00:07:18.181 SPDK Configuration: 00:07:18.181 Core mask: 0x1 00:07:18.181 00:07:18.181 Accel Perf Configuration: 00:07:18.181 Workload Type: compare 00:07:18.181 Transfer size: 4096 bytes 00:07:18.181 Vector count 1 00:07:18.181 Module: software 00:07:18.181 Queue depth: 32 00:07:18.181 Allocate depth: 32 00:07:18.181 # threads/core: 1 00:07:18.181 Run time: 1 seconds 00:07:18.181 Verify: Yes 00:07:18.181 00:07:18.181 Running for 1 seconds... 00:07:18.181 00:07:18.181 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:18.181 ------------------------------------------------------------------------------------ 00:07:18.181 0,0 398208/s 1555 MiB/s 0 0 00:07:18.181 ==================================================================================== 00:07:18.181 Total 398208/s 1555 MiB/s 0 0' 00:07:18.181 20:02:15 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:15 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:18.181 20:02:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.181 20:02:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:18.181 20:02:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.181 20:02:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.181 20:02:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.181 20:02:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.181 20:02:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.181 20:02:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.181 20:02:15 -- accel/accel.sh@42 -- # jq -r . 00:07:18.181 [2024-04-25 20:02:15.772657] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
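The dualcast and compare cases above, and the xor cases that follow, all run the software module with the same 1-second, verify-enabled invocation; only -w changes. A small sweep outside the harness could collect the Total rows in one pass (illustrative only; the harness-supplied -c /dev/fd/62 config argument is omitted here on the assumption that defaults are acceptable):

    for w in dualcast compare xor; do
        echo "== $w =="
        /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w "$w" -y | grep Total
    done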
00:07:18.181 [2024-04-25 20:02:15.772729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056898 ] 00:07:18.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.181 [2024-04-25 20:02:15.879959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.181 [2024-04-25 20:02:15.978778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val= 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val= 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val=0x1 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val= 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val= 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val=compare 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val= 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val=software 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val=32 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val=32 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val=1 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val=Yes 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val= 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:18.181 20:02:16 -- accel/accel.sh@21 -- # val= 00:07:18.181 20:02:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # IFS=: 00:07:18.181 20:02:16 -- accel/accel.sh@20 -- # read -r var val 00:07:19.560 20:02:17 -- accel/accel.sh@21 -- # val= 00:07:19.560 20:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.560 20:02:17 -- accel/accel.sh@21 -- # val= 00:07:19.560 20:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.560 20:02:17 -- accel/accel.sh@21 -- # val= 00:07:19.560 20:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.560 20:02:17 -- accel/accel.sh@21 -- # val= 00:07:19.560 20:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.560 20:02:17 -- accel/accel.sh@21 -- # val= 00:07:19.560 20:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.560 20:02:17 -- accel/accel.sh@21 -- # val= 00:07:19.560 20:02:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # IFS=: 00:07:19.560 20:02:17 -- accel/accel.sh@20 -- # read -r var val 00:07:19.560 20:02:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.560 20:02:17 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:19.560 20:02:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.560 00:07:19.560 real 0m2.966s 00:07:19.560 user 0m2.624s 00:07:19.560 sys 0m0.346s 00:07:19.560 20:02:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.560 20:02:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.560 ************************************ 00:07:19.560 END TEST accel_compare 00:07:19.560 ************************************ 00:07:19.560 20:02:17 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:19.560 20:02:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:19.560 20:02:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:19.560 20:02:17 -- common/autotest_common.sh@10 -- # set +x 00:07:19.560 ************************************ 00:07:19.560 START TEST accel_xor 00:07:19.560 ************************************ 00:07:19.560 20:02:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:19.560 20:02:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.560 20:02:17 -- accel/accel.sh@17 
-- # local accel_module 00:07:19.560 20:02:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:19.560 20:02:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.560 20:02:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.560 20:02:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.560 20:02:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.560 20:02:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.560 20:02:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.560 20:02:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.560 20:02:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.560 20:02:17 -- accel/accel.sh@42 -- # jq -r . 00:07:19.560 [2024-04-25 20:02:17.296246] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:19.560 [2024-04-25 20:02:17.296315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057097 ] 00:07:19.560 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.560 [2024-04-25 20:02:17.402423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.819 [2024-04-25 20:02:17.497903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.198 20:02:18 -- accel/accel.sh@18 -- # out=' 00:07:21.198 SPDK Configuration: 00:07:21.198 Core mask: 0x1 00:07:21.198 00:07:21.198 Accel Perf Configuration: 00:07:21.198 Workload Type: xor 00:07:21.198 Source buffers: 2 00:07:21.198 Transfer size: 4096 bytes 00:07:21.198 Vector count 1 00:07:21.198 Module: software 00:07:21.198 Queue depth: 32 00:07:21.198 Allocate depth: 32 00:07:21.198 # threads/core: 1 00:07:21.198 Run time: 1 seconds 00:07:21.198 Verify: Yes 00:07:21.198 00:07:21.198 Running for 1 seconds... 00:07:21.198 00:07:21.198 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.198 ------------------------------------------------------------------------------------ 00:07:21.198 0,0 325440/s 1271 MiB/s 0 0 00:07:21.198 ==================================================================================== 00:07:21.198 Total 325440/s 1271 MiB/s 0 0' 00:07:21.198 20:02:18 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:18 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:21.198 20:02:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.198 20:02:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:21.198 20:02:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.198 20:02:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.198 20:02:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.198 20:02:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.198 20:02:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.198 20:02:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.198 20:02:18 -- accel/accel.sh@42 -- # jq -r . 00:07:21.198 [2024-04-25 20:02:18.752569] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:21.198 [2024-04-25 20:02:18.752658] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057275 ] 00:07:21.198 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.198 [2024-04-25 20:02:18.858230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.198 [2024-04-25 20:02:18.955682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val= 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val= 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val=0x1 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val= 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val= 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val=xor 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val=2 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val= 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val=software 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val=32 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val=32 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- 
accel/accel.sh@21 -- # val=1 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val=Yes 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val= 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:21.198 20:02:19 -- accel/accel.sh@21 -- # val= 00:07:21.198 20:02:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # IFS=: 00:07:21.198 20:02:19 -- accel/accel.sh@20 -- # read -r var val 00:07:22.576 20:02:20 -- accel/accel.sh@21 -- # val= 00:07:22.576 20:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # IFS=: 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # read -r var val 00:07:22.576 20:02:20 -- accel/accel.sh@21 -- # val= 00:07:22.576 20:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # IFS=: 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # read -r var val 00:07:22.576 20:02:20 -- accel/accel.sh@21 -- # val= 00:07:22.576 20:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # IFS=: 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # read -r var val 00:07:22.576 20:02:20 -- accel/accel.sh@21 -- # val= 00:07:22.576 20:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # IFS=: 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # read -r var val 00:07:22.576 20:02:20 -- accel/accel.sh@21 -- # val= 00:07:22.576 20:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # IFS=: 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # read -r var val 00:07:22.576 20:02:20 -- accel/accel.sh@21 -- # val= 00:07:22.576 20:02:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # IFS=: 00:07:22.576 20:02:20 -- accel/accel.sh@20 -- # read -r var val 00:07:22.576 20:02:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.576 20:02:20 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:22.576 20:02:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.576 00:07:22.576 real 0m2.920s 00:07:22.576 user 0m2.595s 00:07:22.576 sys 0m0.328s 00:07:22.576 20:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.576 20:02:20 -- common/autotest_common.sh@10 -- # set +x 00:07:22.576 ************************************ 00:07:22.576 END TEST accel_xor 00:07:22.576 ************************************ 00:07:22.576 20:02:20 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:22.576 20:02:20 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:22.576 20:02:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.576 20:02:20 -- common/autotest_common.sh@10 -- # set +x 00:07:22.576 ************************************ 00:07:22.576 START TEST accel_xor 
00:07:22.576 ************************************ 00:07:22.576 20:02:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:22.576 20:02:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.576 20:02:20 -- accel/accel.sh@17 -- # local accel_module 00:07:22.576 20:02:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:22.576 20:02:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:22.576 20:02:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.576 20:02:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.576 20:02:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.576 20:02:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.576 20:02:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.576 20:02:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.577 20:02:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.577 20:02:20 -- accel/accel.sh@42 -- # jq -r . 00:07:22.577 [2024-04-25 20:02:20.263166] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:22.577 [2024-04-25 20:02:20.263239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057480 ] 00:07:22.577 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.577 [2024-04-25 20:02:20.369883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.577 [2024-04-25 20:02:20.469732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.953 20:02:21 -- accel/accel.sh@18 -- # out=' 00:07:23.953 SPDK Configuration: 00:07:23.953 Core mask: 0x1 00:07:23.953 00:07:23.953 Accel Perf Configuration: 00:07:23.953 Workload Type: xor 00:07:23.953 Source buffers: 3 00:07:23.953 Transfer size: 4096 bytes 00:07:23.953 Vector count 1 00:07:23.953 Module: software 00:07:23.953 Queue depth: 32 00:07:23.953 Allocate depth: 32 00:07:23.953 # threads/core: 1 00:07:23.953 Run time: 1 seconds 00:07:23.953 Verify: Yes 00:07:23.953 00:07:23.953 Running for 1 seconds... 00:07:23.953 00:07:23.953 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:23.953 ------------------------------------------------------------------------------------ 00:07:23.953 0,0 305824/s 1194 MiB/s 0 0 00:07:23.953 ==================================================================================== 00:07:23.953 Total 305824/s 1194 MiB/s 0 0' 00:07:23.953 20:02:21 -- accel/accel.sh@20 -- # IFS=: 00:07:23.953 20:02:21 -- accel/accel.sh@20 -- # read -r var val 00:07:23.954 20:02:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:23.954 20:02:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.954 20:02:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:23.954 20:02:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.954 20:02:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.954 20:02:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.954 20:02:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.954 20:02:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.954 20:02:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.954 20:02:21 -- accel/accel.sh@42 -- # jq -r . 00:07:23.954 [2024-04-25 20:02:21.741591] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
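The first accel_xor case reported "Source buffers: 2" with no -x flag, while this second case passes -x 3 and reports "Source buffers: 3", so -x sets the number of XOR source buffers. Re-running just this case by hand (same caveat as above about dropping the harness's -c argument):

    # xor across 3 source buffers, as in the accel_xor -x 3 case in this section
    /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3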
00:07:23.954 [2024-04-25 20:02:21.741682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057658 ] 00:07:23.954 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.954 [2024-04-25 20:02:21.847618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.213 [2024-04-25 20:02:21.948121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.213 20:02:21 -- accel/accel.sh@21 -- # val= 00:07:24.213 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.213 20:02:22 -- accel/accel.sh@21 -- # val= 00:07:24.213 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.213 20:02:22 -- accel/accel.sh@21 -- # val=0x1 00:07:24.213 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.213 20:02:22 -- accel/accel.sh@21 -- # val= 00:07:24.213 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.213 20:02:22 -- accel/accel.sh@21 -- # val= 00:07:24.213 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.213 20:02:22 -- accel/accel.sh@21 -- # val=xor 00:07:24.213 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.213 20:02:22 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.213 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val=3 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val= 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val=software 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val=32 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val=32 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- 
accel/accel.sh@21 -- # val=1 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val=Yes 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val= 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:24.214 20:02:22 -- accel/accel.sh@21 -- # val= 00:07:24.214 20:02:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # IFS=: 00:07:24.214 20:02:22 -- accel/accel.sh@20 -- # read -r var val 00:07:25.603 20:02:23 -- accel/accel.sh@21 -- # val= 00:07:25.603 20:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.603 20:02:23 -- accel/accel.sh@21 -- # val= 00:07:25.603 20:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.603 20:02:23 -- accel/accel.sh@21 -- # val= 00:07:25.603 20:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.603 20:02:23 -- accel/accel.sh@21 -- # val= 00:07:25.603 20:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.603 20:02:23 -- accel/accel.sh@21 -- # val= 00:07:25.603 20:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.603 20:02:23 -- accel/accel.sh@21 -- # val= 00:07:25.603 20:02:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # IFS=: 00:07:25.603 20:02:23 -- accel/accel.sh@20 -- # read -r var val 00:07:25.603 20:02:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.603 20:02:23 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:25.603 20:02:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.603 00:07:25.603 real 0m2.961s 00:07:25.603 user 0m2.636s 00:07:25.603 sys 0m0.329s 00:07:25.603 20:02:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.603 20:02:23 -- common/autotest_common.sh@10 -- # set +x 00:07:25.603 ************************************ 00:07:25.603 END TEST accel_xor 00:07:25.603 ************************************ 00:07:25.603 20:02:23 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:25.603 20:02:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:25.603 20:02:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.603 20:02:23 -- common/autotest_common.sh@10 -- # set +x 00:07:25.603 ************************************ 00:07:25.603 START TEST 
accel_dif_verify 00:07:25.603 ************************************ 00:07:25.603 20:02:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:25.603 20:02:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.603 20:02:23 -- accel/accel.sh@17 -- # local accel_module 00:07:25.603 20:02:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:25.603 20:02:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:25.603 20:02:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.603 20:02:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.603 20:02:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.603 20:02:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.603 20:02:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.603 20:02:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.603 20:02:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.603 20:02:23 -- accel/accel.sh@42 -- # jq -r . 00:07:25.603 [2024-04-25 20:02:23.270497] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:25.603 [2024-04-25 20:02:23.270569] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2057906 ] 00:07:25.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.603 [2024-04-25 20:02:23.377827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.603 [2024-04-25 20:02:23.476975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.991 20:02:24 -- accel/accel.sh@18 -- # out=' 00:07:26.991 SPDK Configuration: 00:07:26.991 Core mask: 0x1 00:07:26.991 00:07:26.991 Accel Perf Configuration: 00:07:26.991 Workload Type: dif_verify 00:07:26.991 Vector size: 4096 bytes 00:07:26.991 Transfer size: 4096 bytes 00:07:26.991 Block size: 512 bytes 00:07:26.991 Metadata size: 8 bytes 00:07:26.991 Vector count 1 00:07:26.991 Module: software 00:07:26.991 Queue depth: 32 00:07:26.991 Allocate depth: 32 00:07:26.991 # threads/core: 1 00:07:26.991 Run time: 1 seconds 00:07:26.991 Verify: No 00:07:26.991 00:07:26.991 Running for 1 seconds... 00:07:26.991 00:07:26.991 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:26.991 ------------------------------------------------------------------------------------ 00:07:26.991 0,0 85024/s 337 MiB/s 0 0 00:07:26.991 ==================================================================================== 00:07:26.991 Total 85024/s 332 MiB/s 0 0' 00:07:26.991 20:02:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:26.991 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:26.991 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:26.991 20:02:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:26.991 20:02:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.991 20:02:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.991 20:02:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.991 20:02:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.991 20:02:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.991 20:02:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.991 20:02:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.991 20:02:24 -- accel/accel.sh@42 -- # jq -r . 
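Unlike the copy/compare/xor cases, the dif_verify invocation above carries no -y flag, and its configuration dump reports "Verify: No" with a 512-byte block size and 8 bytes of DIF metadata. A standalone re-run would be the sketch below; omitting the harness-supplied -c /dev/fd/62 is, as before, an assumption:

    /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify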
00:07:26.991 [2024-04-25 20:02:24.738110] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:26.991 [2024-04-25 20:02:24.738183] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058137 ] 00:07:26.991 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.991 [2024-04-25 20:02:24.833490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.251 [2024-04-25 20:02:24.935113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val= 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val= 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val=0x1 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val= 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val= 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val=dif_verify 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val= 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val=software 00:07:27.251 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.251 20:02:24 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.251 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.251 20:02:24 -- accel/accel.sh@21 -- # val=32 00:07:27.252 20:02:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.252 20:02:24 -- accel/accel.sh@20 -- # IFS=: 00:07:27.252 20:02:24 -- accel/accel.sh@20 -- # read -r var val 00:07:27.252 20:02:25 -- accel/accel.sh@21 -- # val=32 00:07:27.252 20:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # IFS=: 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # read -r var val 00:07:27.252 20:02:25 -- accel/accel.sh@21 -- # val=1 00:07:27.252 20:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # IFS=: 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # read -r var val 00:07:27.252 20:02:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.252 20:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # IFS=: 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # read -r var val 00:07:27.252 20:02:25 -- accel/accel.sh@21 -- # val=No 00:07:27.252 20:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # IFS=: 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # read -r var val 00:07:27.252 20:02:25 -- accel/accel.sh@21 -- # val= 00:07:27.252 20:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # IFS=: 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # read -r var val 00:07:27.252 20:02:25 -- accel/accel.sh@21 -- # val= 00:07:27.252 20:02:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # IFS=: 00:07:27.252 20:02:25 -- accel/accel.sh@20 -- # read -r var val 00:07:28.248 20:02:26 -- accel/accel.sh@21 -- # val= 00:07:28.248 20:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.248 20:02:26 -- accel/accel.sh@21 -- # val= 00:07:28.248 20:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.248 20:02:26 -- accel/accel.sh@21 -- # val= 00:07:28.248 20:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.248 20:02:26 -- accel/accel.sh@21 -- # val= 00:07:28.248 20:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.248 20:02:26 -- accel/accel.sh@21 -- # val= 00:07:28.248 20:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.248 20:02:26 -- accel/accel.sh@21 -- # val= 00:07:28.248 20:02:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # IFS=: 00:07:28.248 20:02:26 -- accel/accel.sh@20 -- # read -r var val 00:07:28.508 20:02:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.508 20:02:26 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:28.508 20:02:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.508 00:07:28.508 real 0m2.941s 00:07:28.508 user 0m2.624s 00:07:28.508 sys 0m0.323s 00:07:28.508 20:02:26 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.508 20:02:26 -- common/autotest_common.sh@10 -- # set +x 00:07:28.508 ************************************ 00:07:28.508 END TEST accel_dif_verify 00:07:28.508 ************************************ 00:07:28.508 20:02:26 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:28.508 20:02:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:28.508 20:02:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:28.508 20:02:26 -- common/autotest_common.sh@10 -- # set +x 00:07:28.508 ************************************ 00:07:28.508 START TEST accel_dif_generate 00:07:28.508 ************************************ 00:07:28.508 20:02:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:28.508 20:02:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.508 20:02:26 -- accel/accel.sh@17 -- # local accel_module 00:07:28.508 20:02:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:28.508 20:02:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.508 20:02:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.508 20:02:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.508 20:02:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.508 20:02:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.508 20:02:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.508 20:02:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.508 20:02:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.508 20:02:26 -- accel/accel.sh@42 -- # jq -r . 00:07:28.508 [2024-04-25 20:02:26.257963] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:28.508 [2024-04-25 20:02:26.258034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058404 ] 00:07:28.508 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.508 [2024-04-25 20:02:26.364814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.768 [2024-04-25 20:02:26.462341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.146 20:02:27 -- accel/accel.sh@18 -- # out=' 00:07:30.146 SPDK Configuration: 00:07:30.146 Core mask: 0x1 00:07:30.146 00:07:30.146 Accel Perf Configuration: 00:07:30.146 Workload Type: dif_generate 00:07:30.146 Vector size: 4096 bytes 00:07:30.146 Transfer size: 4096 bytes 00:07:30.146 Block size: 512 bytes 00:07:30.146 Metadata size: 8 bytes 00:07:30.146 Vector count 1 00:07:30.146 Module: software 00:07:30.146 Queue depth: 32 00:07:30.146 Allocate depth: 32 00:07:30.146 # threads/core: 1 00:07:30.146 Run time: 1 seconds 00:07:30.146 Verify: No 00:07:30.146 00:07:30.146 Running for 1 seconds... 
00:07:30.146 00:07:30.146 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.146 ------------------------------------------------------------------------------------ 00:07:30.146 0,0 102016/s 404 MiB/s 0 0 00:07:30.146 ==================================================================================== 00:07:30.146 Total 102016/s 398 MiB/s 0 0' 00:07:30.146 20:02:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:30.146 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.146 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.146 20:02:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:30.146 20:02:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.146 20:02:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.146 20:02:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.147 20:02:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.147 20:02:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.147 20:02:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.147 20:02:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.147 20:02:27 -- accel/accel.sh@42 -- # jq -r . 00:07:30.147 [2024-04-25 20:02:27.712076] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:30.147 [2024-04-25 20:02:27.712144] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058589 ] 00:07:30.147 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.147 [2024-04-25 20:02:27.805233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.147 [2024-04-25 20:02:27.901653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val= 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val= 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val=0x1 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val= 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val= 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val=dif_generate 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 
00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val= 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val=software 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val=32 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val=32 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val=1 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val=No 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val= 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:30.147 20:02:27 -- accel/accel.sh@21 -- # val= 00:07:30.147 20:02:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # IFS=: 00:07:30.147 20:02:27 -- accel/accel.sh@20 -- # read -r var val 00:07:31.524 20:02:29 -- accel/accel.sh@21 -- # val= 00:07:31.524 20:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.524 20:02:29 -- accel/accel.sh@21 -- # val= 00:07:31.524 20:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.524 20:02:29 -- accel/accel.sh@21 -- # val= 00:07:31.524 20:02:29 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.524 20:02:29 -- accel/accel.sh@21 -- # val= 00:07:31.524 20:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.524 20:02:29 -- accel/accel.sh@21 -- # val= 00:07:31.524 20:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.524 20:02:29 -- accel/accel.sh@21 -- # val= 00:07:31.524 20:02:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # IFS=: 00:07:31.524 20:02:29 -- accel/accel.sh@20 -- # read -r var val 00:07:31.524 20:02:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.524 20:02:29 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:31.524 20:02:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.524 00:07:31.524 real 0m2.900s 00:07:31.524 user 0m2.604s 00:07:31.524 sys 0m0.301s 00:07:31.525 20:02:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.525 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:07:31.525 ************************************ 00:07:31.525 END TEST accel_dif_generate 00:07:31.525 ************************************ 00:07:31.525 20:02:29 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:31.525 20:02:29 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:31.525 20:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.525 20:02:29 -- common/autotest_common.sh@10 -- # set +x 00:07:31.525 ************************************ 00:07:31.525 START TEST accel_dif_generate_copy 00:07:31.525 ************************************ 00:07:31.525 20:02:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:31.525 20:02:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.525 20:02:29 -- accel/accel.sh@17 -- # local accel_module 00:07:31.525 20:02:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:31.525 20:02:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:31.525 20:02:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.525 20:02:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.525 20:02:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.525 20:02:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.525 20:02:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.525 20:02:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.525 20:02:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.525 20:02:29 -- accel/accel.sh@42 -- # jq -r . 00:07:31.525 [2024-04-25 20:02:29.204915] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:31.525 [2024-04-25 20:02:29.204995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058791 ] 00:07:31.525 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.525 [2024-04-25 20:02:29.310326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.525 [2024-04-25 20:02:29.408622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.901 20:02:30 -- accel/accel.sh@18 -- # out=' 00:07:32.901 SPDK Configuration: 00:07:32.901 Core mask: 0x1 00:07:32.901 00:07:32.901 Accel Perf Configuration: 00:07:32.901 Workload Type: dif_generate_copy 00:07:32.901 Vector size: 4096 bytes 00:07:32.901 Transfer size: 4096 bytes 00:07:32.901 Vector count 1 00:07:32.901 Module: software 00:07:32.901 Queue depth: 32 00:07:32.901 Allocate depth: 32 00:07:32.901 # threads/core: 1 00:07:32.901 Run time: 1 seconds 00:07:32.901 Verify: No 00:07:32.901 00:07:32.901 Running for 1 seconds... 00:07:32.901 00:07:32.901 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.901 ------------------------------------------------------------------------------------ 00:07:32.901 0,0 78016/s 309 MiB/s 0 0 00:07:32.901 ==================================================================================== 00:07:32.901 Total 78016/s 304 MiB/s 0 0' 00:07:32.901 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:32.901 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:32.901 20:02:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:32.901 20:02:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.901 20:02:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.901 20:02:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.901 20:02:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.901 20:02:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.901 20:02:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.901 20:02:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.901 20:02:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.901 20:02:30 -- accel/accel.sh@42 -- # jq -r . 00:07:32.901 [2024-04-25 20:02:30.673187] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:32.901 [2024-04-25 20:02:30.673258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058969 ] 00:07:32.901 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.901 [2024-04-25 20:02:30.779656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.161 [2024-04-25 20:02:30.880327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val=0x1 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val=software 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val=32 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val=32 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r 
var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val=1 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val=No 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:33.161 20:02:30 -- accel/accel.sh@21 -- # val= 00:07:33.161 20:02:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # IFS=: 00:07:33.161 20:02:30 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 20:02:32 -- accel/accel.sh@21 -- # val= 00:07:34.545 20:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 20:02:32 -- accel/accel.sh@21 -- # val= 00:07:34.545 20:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 20:02:32 -- accel/accel.sh@21 -- # val= 00:07:34.545 20:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 20:02:32 -- accel/accel.sh@21 -- # val= 00:07:34.545 20:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 20:02:32 -- accel/accel.sh@21 -- # val= 00:07:34.545 20:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 20:02:32 -- accel/accel.sh@21 -- # val= 00:07:34.545 20:02:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # IFS=: 00:07:34.545 20:02:32 -- accel/accel.sh@20 -- # read -r var val 00:07:34.545 20:02:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.545 20:02:32 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:34.545 20:02:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.545 00:07:34.545 real 0m2.951s 00:07:34.545 user 0m2.615s 00:07:34.545 sys 0m0.338s 00:07:34.545 20:02:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.545 20:02:32 -- common/autotest_common.sh@10 -- # set +x 00:07:34.545 ************************************ 00:07:34.545 END TEST accel_dif_generate_copy 00:07:34.545 ************************************ 00:07:34.545 20:02:32 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:34.545 20:02:32 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:34.545 20:02:32 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:34.545 20:02:32 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.545 20:02:32 -- common/autotest_common.sh@10 -- # set +x 00:07:34.545 ************************************ 00:07:34.545 START TEST accel_comp 00:07:34.545 ************************************ 00:07:34.546 20:02:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:34.546 20:02:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.546 20:02:32 -- accel/accel.sh@17 -- # local accel_module 00:07:34.546 20:02:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:34.546 20:02:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:34.546 20:02:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.546 20:02:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.546 20:02:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.546 20:02:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.546 20:02:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.546 20:02:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.546 20:02:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.546 20:02:32 -- accel/accel.sh@42 -- # jq -r . 00:07:34.546 [2024-04-25 20:02:32.197403] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:34.546 [2024-04-25 20:02:32.197472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059173 ] 00:07:34.546 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.546 [2024-04-25 20:02:32.303512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.546 [2024-04-25 20:02:32.401943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.925 20:02:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:35.925 00:07:35.925 SPDK Configuration: 00:07:35.925 Core mask: 0x1 00:07:35.925 00:07:35.925 Accel Perf Configuration: 00:07:35.925 Workload Type: compress 00:07:35.925 Transfer size: 4096 bytes 00:07:35.925 Vector count 1 00:07:35.925 Module: software 00:07:35.925 File Name: /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:35.925 Queue depth: 32 00:07:35.925 Allocate depth: 32 00:07:35.925 # threads/core: 1 00:07:35.925 Run time: 1 seconds 00:07:35.925 Verify: No 00:07:35.925 00:07:35.925 Running for 1 seconds... 
00:07:35.925 00:07:35.925 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.925 ------------------------------------------------------------------------------------ 00:07:35.925 0,0 42528/s 177 MiB/s 0 0 00:07:35.925 ==================================================================================== 00:07:35.925 Total 42528/s 166 MiB/s 0 0' 00:07:35.925 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:35.925 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:35.925 20:02:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:35.925 20:02:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.925 20:02:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:35.925 20:02:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.925 20:02:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.925 20:02:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.925 20:02:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.925 20:02:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.925 20:02:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.925 20:02:33 -- accel/accel.sh@42 -- # jq -r . 00:07:35.925 [2024-04-25 20:02:33.675524] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:35.925 [2024-04-25 20:02:33.675607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059351 ] 00:07:35.925 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.925 [2024-04-25 20:02:33.779978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.185 [2024-04-25 20:02:33.879146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=0x1 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=compress 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=software 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=32 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=32 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=1 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val=No 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:36.185 20:02:33 -- accel/accel.sh@21 -- # val= 00:07:36.185 20:02:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # IFS=: 00:07:36.185 20:02:33 -- accel/accel.sh@20 -- # read -r var val 00:07:37.565 20:02:35 -- accel/accel.sh@21 -- # val= 00:07:37.565 20:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.565 20:02:35 -- accel/accel.sh@21 -- # val= 00:07:37.565 20:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.565 20:02:35 -- accel/accel.sh@21 -- # val= 00:07:37.565 20:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.565 
20:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.565 20:02:35 -- accel/accel.sh@21 -- # val= 00:07:37.565 20:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.565 20:02:35 -- accel/accel.sh@21 -- # val= 00:07:37.565 20:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.565 20:02:35 -- accel/accel.sh@21 -- # val= 00:07:37.565 20:02:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # IFS=: 00:07:37.565 20:02:35 -- accel/accel.sh@20 -- # read -r var val 00:07:37.565 20:02:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.565 20:02:35 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:37.565 20:02:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.565 00:07:37.565 real 0m2.960s 00:07:37.565 user 0m2.617s 00:07:37.565 sys 0m0.347s 00:07:37.565 20:02:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.565 20:02:35 -- common/autotest_common.sh@10 -- # set +x 00:07:37.565 ************************************ 00:07:37.565 END TEST accel_comp 00:07:37.565 ************************************ 00:07:37.565 20:02:35 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:07:37.565 20:02:35 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:37.565 20:02:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.565 20:02:35 -- common/autotest_common.sh@10 -- # set +x 00:07:37.565 ************************************ 00:07:37.565 START TEST accel_decomp 00:07:37.565 ************************************ 00:07:37.565 20:02:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:07:37.565 20:02:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.565 20:02:35 -- accel/accel.sh@17 -- # local accel_module 00:07:37.565 20:02:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:07:37.565 20:02:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:07:37.565 20:02:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.565 20:02:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.565 20:02:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.565 20:02:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.565 20:02:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.565 20:02:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.565 20:02:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.565 20:02:35 -- accel/accel.sh@42 -- # jq -r . 00:07:37.565 [2024-04-25 20:02:35.193203] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:37.565 [2024-04-25 20:02:35.193274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059555 ] 00:07:37.565 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.565 [2024-04-25 20:02:35.300401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.565 [2024-04-25 20:02:35.399610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.945 20:02:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:38.945 00:07:38.945 SPDK Configuration: 00:07:38.945 Core mask: 0x1 00:07:38.945 00:07:38.945 Accel Perf Configuration: 00:07:38.945 Workload Type: decompress 00:07:38.945 Transfer size: 4096 bytes 00:07:38.945 Vector count 1 00:07:38.945 Module: software 00:07:38.945 File Name: /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:38.945 Queue depth: 32 00:07:38.945 Allocate depth: 32 00:07:38.945 # threads/core: 1 00:07:38.945 Run time: 1 seconds 00:07:38.945 Verify: Yes 00:07:38.945 00:07:38.945 Running for 1 seconds... 00:07:38.945 00:07:38.945 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.945 ------------------------------------------------------------------------------------ 00:07:38.945 0,0 57440/s 105 MiB/s 0 0 00:07:38.945 ==================================================================================== 00:07:38.945 Total 57440/s 224 MiB/s 0 0' 00:07:38.945 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:38.945 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:38.945 20:02:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:07:38.945 20:02:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.946 20:02:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y 00:07:38.946 20:02:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.946 20:02:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.946 20:02:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.946 20:02:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.946 20:02:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.946 20:02:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.946 20:02:36 -- accel/accel.sh@42 -- # jq -r . 00:07:38.946 [2024-04-25 20:02:36.664412] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:38.946 [2024-04-25 20:02:36.664486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059733 ] 00:07:38.946 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.946 [2024-04-25 20:02:36.772177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.946 [2024-04-25 20:02:36.869640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=0x1 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=decompress 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=software 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=32 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- 
accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=32 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=1 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val=Yes 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:39.206 20:02:36 -- accel/accel.sh@21 -- # val= 00:07:39.206 20:02:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # IFS=: 00:07:39.206 20:02:36 -- accel/accel.sh@20 -- # read -r var val 00:07:40.588 20:02:38 -- accel/accel.sh@21 -- # val= 00:07:40.588 20:02:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.588 20:02:38 -- accel/accel.sh@21 -- # val= 00:07:40.588 20:02:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.588 20:02:38 -- accel/accel.sh@21 -- # val= 00:07:40.588 20:02:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.588 20:02:38 -- accel/accel.sh@21 -- # val= 00:07:40.588 20:02:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.588 20:02:38 -- accel/accel.sh@21 -- # val= 00:07:40.588 20:02:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.588 20:02:38 -- accel/accel.sh@21 -- # val= 00:07:40.588 20:02:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # IFS=: 00:07:40.588 20:02:38 -- accel/accel.sh@20 -- # read -r var val 00:07:40.588 20:02:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.588 20:02:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.588 20:02:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.588 00:07:40.588 real 0m2.941s 00:07:40.588 user 0m2.626s 00:07:40.588 sys 0m0.321s 00:07:40.588 20:02:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.588 20:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.588 ************************************ 00:07:40.588 END TEST accel_decomp 00:07:40.588 ************************************ 00:07:40.588 20:02:38 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.588 20:02:38 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:40.588 20:02:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.588 20:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:40.588 ************************************ 00:07:40.588 START TEST accel_decmop_full 00:07:40.588 ************************************ 00:07:40.588 20:02:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.588 20:02:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.588 20:02:38 -- accel/accel.sh@17 -- # local accel_module 00:07:40.588 20:02:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.588 20:02:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:40.588 20:02:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.588 20:02:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.588 20:02:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.588 20:02:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.588 20:02:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.588 20:02:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.588 20:02:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.588 20:02:38 -- accel/accel.sh@42 -- # jq -r . 00:07:40.588 [2024-04-25 20:02:38.171811] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:40.588 [2024-04-25 20:02:38.171865] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059994 ] 00:07:40.588 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.588 [2024-04-25 20:02:38.265539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.588 [2024-04-25 20:02:38.366369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.968 20:02:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:41.968 00:07:41.968 SPDK Configuration: 00:07:41.968 Core mask: 0x1 00:07:41.968 00:07:41.968 Accel Perf Configuration: 00:07:41.968 Workload Type: decompress 00:07:41.968 Transfer size: 111250 bytes 00:07:41.968 Vector count 1 00:07:41.968 Module: software 00:07:41.968 File Name: /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:41.968 Queue depth: 32 00:07:41.968 Allocate depth: 32 00:07:41.968 # threads/core: 1 00:07:41.968 Run time: 1 seconds 00:07:41.968 Verify: Yes 00:07:41.968 00:07:41.968 Running for 1 seconds... 
00:07:41.968 00:07:41.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.968 ------------------------------------------------------------------------------------ 00:07:41.968 0,0 3808/s 157 MiB/s 0 0 00:07:41.968 ==================================================================================== 00:07:41.968 Total 3808/s 404 MiB/s 0 0' 00:07:41.968 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.968 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.968 20:02:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.968 20:02:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.968 20:02:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.968 20:02:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.968 20:02:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.968 20:02:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.968 20:02:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.968 20:02:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.968 20:02:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.968 20:02:39 -- accel/accel.sh@42 -- # jq -r . 00:07:41.968 [2024-04-25 20:02:39.629708] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:41.968 [2024-04-25 20:02:39.629778] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060225 ] 00:07:41.968 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.968 [2024-04-25 20:02:39.734425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.968 [2024-04-25 20:02:39.832657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.968 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:41.968 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.968 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=0x1 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=decompress 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 
00:07:41.969 20:02:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=software 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=32 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=32 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=1 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val=Yes 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:41.969 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:41.969 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.969 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:42.228 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:42.228 20:02:39 -- accel/accel.sh@21 -- # val= 00:07:42.228 20:02:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.228 20:02:39 -- accel/accel.sh@20 -- # IFS=: 00:07:42.228 20:02:39 -- accel/accel.sh@20 -- # read -r var val 00:07:43.165 20:02:41 -- accel/accel.sh@21 -- # val= 00:07:43.165 20:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.165 20:02:41 -- accel/accel.sh@21 -- # val= 00:07:43.165 20:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.165 20:02:41 -- accel/accel.sh@21 -- # val= 00:07:43.165 20:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.165 20:02:41 -- 
accel/accel.sh@20 -- # IFS=: 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.165 20:02:41 -- accel/accel.sh@21 -- # val= 00:07:43.165 20:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.165 20:02:41 -- accel/accel.sh@21 -- # val= 00:07:43.165 20:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.165 20:02:41 -- accel/accel.sh@21 -- # val= 00:07:43.165 20:02:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # IFS=: 00:07:43.165 20:02:41 -- accel/accel.sh@20 -- # read -r var val 00:07:43.165 20:02:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:43.165 20:02:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:43.165 20:02:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.165 00:07:43.165 real 0m2.924s 00:07:43.165 user 0m2.610s 00:07:43.165 sys 0m0.318s 00:07:43.165 20:02:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.165 20:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:43.165 ************************************ 00:07:43.165 END TEST accel_decmop_full 00:07:43.166 ************************************ 00:07:43.425 20:02:41 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.425 20:02:41 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:43.425 20:02:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.425 20:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:43.425 ************************************ 00:07:43.425 START TEST accel_decomp_mcore 00:07:43.425 ************************************ 00:07:43.425 20:02:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.425 20:02:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.425 20:02:41 -- accel/accel.sh@17 -- # local accel_module 00:07:43.425 20:02:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.425 20:02:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:43.425 20:02:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.425 20:02:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:43.425 20:02:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.425 20:02:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.425 20:02:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:43.425 20:02:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:43.425 20:02:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:43.425 20:02:41 -- accel/accel.sh@42 -- # jq -r . 00:07:43.425 [2024-04-25 20:02:41.158333] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:43.425 [2024-04-25 20:02:41.158418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060476 ] 00:07:43.425 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.425 [2024-04-25 20:02:41.265866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.684 [2024-04-25 20:02:41.372582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.684 [2024-04-25 20:02:41.372682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.684 [2024-04-25 20:02:41.372712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.684 [2024-04-25 20:02:41.372715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.061 20:02:42 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:45.061 00:07:45.061 SPDK Configuration: 00:07:45.061 Core mask: 0xf 00:07:45.061 00:07:45.061 Accel Perf Configuration: 00:07:45.061 Workload Type: decompress 00:07:45.061 Transfer size: 4096 bytes 00:07:45.061 Vector count 1 00:07:45.061 Module: software 00:07:45.061 File Name: /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:45.061 Queue depth: 32 00:07:45.061 Allocate depth: 32 00:07:45.061 # threads/core: 1 00:07:45.061 Run time: 1 seconds 00:07:45.061 Verify: Yes 00:07:45.061 00:07:45.061 Running for 1 seconds... 00:07:45.061 00:07:45.061 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.061 ------------------------------------------------------------------------------------ 00:07:45.061 0,0 50592/s 93 MiB/s 0 0 00:07:45.061 3,0 50880/s 93 MiB/s 0 0 00:07:45.061 2,0 71104/s 131 MiB/s 0 0 00:07:45.061 1,0 50880/s 93 MiB/s 0 0 00:07:45.061 ==================================================================================== 00:07:45.061 Total 223456/s 872 MiB/s 0 0' 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.061 20:02:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.061 20:02:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.061 20:02:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.061 20:02:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.061 20:02:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.061 20:02:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.061 20:02:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.061 20:02:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.061 20:02:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.061 20:02:42 -- accel/accel.sh@42 -- # jq -r . 00:07:45.061 [2024-04-25 20:02:42.656642] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:45.061 [2024-04-25 20:02:42.656715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060665 ] 00:07:45.061 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.061 [2024-04-25 20:02:42.765290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.061 [2024-04-25 20:02:42.867543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.061 [2024-04-25 20:02:42.867628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.061 [2024-04-25 20:02:42.867664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.061 [2024-04-25 20:02:42.867667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.061 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.061 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.061 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.061 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.061 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.061 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.061 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.061 20:02:42 -- accel/accel.sh@21 -- # val=0xf 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val=decompress 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val=software 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val=32 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val=32 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val=1 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val=Yes 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:45.062 20:02:42 -- accel/accel.sh@21 -- # val= 00:07:45.062 20:02:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # IFS=: 00:07:45.062 20:02:42 -- accel/accel.sh@20 -- # read -r var val 00:07:46.438 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.438 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.438 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.438 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.438 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.438 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.438 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.438 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.438 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.438 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.438 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.438 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.439 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.439 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.439 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.439 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.439 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.439 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.439 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.439 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.439 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.439 20:02:44 
-- accel/accel.sh@20 -- # IFS=: 00:07:46.439 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.439 20:02:44 -- accel/accel.sh@21 -- # val= 00:07:46.439 20:02:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.439 20:02:44 -- accel/accel.sh@20 -- # IFS=: 00:07:46.439 20:02:44 -- accel/accel.sh@20 -- # read -r var val 00:07:46.439 20:02:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:46.439 20:02:44 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:46.439 20:02:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.439 00:07:46.439 real 0m2.999s 00:07:46.439 user 0m9.434s 00:07:46.439 sys 0m0.378s 00:07:46.439 20:02:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.439 20:02:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.439 ************************************ 00:07:46.439 END TEST accel_decomp_mcore 00:07:46.439 ************************************ 00:07:46.439 20:02:44 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.439 20:02:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:46.439 20:02:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:46.439 20:02:44 -- common/autotest_common.sh@10 -- # set +x 00:07:46.439 ************************************ 00:07:46.439 START TEST accel_decomp_full_mcore 00:07:46.439 ************************************ 00:07:46.439 20:02:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.439 20:02:44 -- accel/accel.sh@16 -- # local accel_opc 00:07:46.439 20:02:44 -- accel/accel.sh@17 -- # local accel_module 00:07:46.439 20:02:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.439 20:02:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.439 20:02:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.439 20:02:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.439 20:02:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.439 20:02:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.439 20:02:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.439 20:02:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.439 20:02:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.439 20:02:44 -- accel/accel.sh@42 -- # jq -r . 00:07:46.439 [2024-04-25 20:02:44.199255] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:46.439 [2024-04-25 20:02:44.199324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060864 ] 00:07:46.439 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.439 [2024-04-25 20:02:44.305448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.697 [2024-04-25 20:02:44.408600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.697 [2024-04-25 20:02:44.408690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.697 [2024-04-25 20:02:44.408742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.697 [2024-04-25 20:02:44.408744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.071 20:02:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:48.071 00:07:48.071 SPDK Configuration: 00:07:48.071 Core mask: 0xf 00:07:48.071 00:07:48.071 Accel Perf Configuration: 00:07:48.071 Workload Type: decompress 00:07:48.071 Transfer size: 111250 bytes 00:07:48.071 Vector count 1 00:07:48.071 Module: software 00:07:48.071 File Name: /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:48.071 Queue depth: 32 00:07:48.071 Allocate depth: 32 00:07:48.071 # threads/core: 1 00:07:48.071 Run time: 1 seconds 00:07:48.071 Verify: Yes 00:07:48.071 00:07:48.071 Running for 1 seconds... 00:07:48.071 00:07:48.071 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.071 ------------------------------------------------------------------------------------ 00:07:48.071 0,0 3776/s 155 MiB/s 0 0 00:07:48.071 3,0 3776/s 155 MiB/s 0 0 00:07:48.071 2,0 5568/s 230 MiB/s 0 0 00:07:48.071 1,0 3808/s 157 MiB/s 0 0 00:07:48.071 ==================================================================================== 00:07:48.071 Total 16928/s 1795 MiB/s 0 0' 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.071 20:02:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.071 20:02:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.071 20:02:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.071 20:02:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.071 20:02:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.071 20:02:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.071 20:02:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.071 20:02:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.071 20:02:45 -- accel/accel.sh@42 -- # jq -r . 00:07:48.071 [2024-04-25 20:02:45.704012] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
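The full_mcore run traced above differs from decomp_mcore only by the extra -o 0 argument on the accel_perf command line; judging from the two traces, that is what moves the run from 4096-byte to the full 111250-byte transfers reported in the configuration dump and bandwidth table above (flag semantics inferred from the log, so treat them as an assumption). A sketch of the standalone equivalent:
# Hedged sketch: full-buffer multi-core decompress, mirroring the traced command line.
SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf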
00:07:48.071 [2024-04-25 20:02:45.704083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061051 ] 00:07:48.071 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.071 [2024-04-25 20:02:45.812579] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.071 [2024-04-25 20:02:45.915120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.071 [2024-04-25 20:02:45.915206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.071 [2024-04-25 20:02:45.915323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.071 [2024-04-25 20:02:45.915324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val=0xf 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val=decompress 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val=software 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.071 20:02:45 -- accel/accel.sh@21 -- # val=32 00:07:48.071 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.071 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.072 20:02:45 -- accel/accel.sh@21 -- # val=32 00:07:48.072 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.072 20:02:45 -- accel/accel.sh@21 -- # val=1 00:07:48.072 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.072 20:02:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.072 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.072 20:02:45 -- accel/accel.sh@21 -- # val=Yes 00:07:48.072 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.072 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.072 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:48.072 20:02:45 -- accel/accel.sh@21 -- # val= 00:07:48.072 20:02:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # IFS=: 00:07:48.072 20:02:45 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 
-- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@21 -- # val= 00:07:49.447 20:02:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # IFS=: 00:07:49.447 20:02:47 -- accel/accel.sh@20 -- # read -r var val 00:07:49.447 20:02:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.447 20:02:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:49.447 20:02:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.447 00:07:49.447 real 0m3.011s 00:07:49.447 user 0m9.517s 00:07:49.447 sys 0m0.358s 00:07:49.447 20:02:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.447 20:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:49.447 ************************************ 00:07:49.447 END TEST accel_decomp_full_mcore 00:07:49.447 ************************************ 00:07:49.447 20:02:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.447 20:02:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:49.447 20:02:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.447 20:02:47 -- common/autotest_common.sh@10 -- # set +x 00:07:49.447 ************************************ 00:07:49.447 START TEST accel_decomp_mthread 00:07:49.447 ************************************ 00:07:49.447 20:02:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.447 20:02:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.447 20:02:47 -- accel/accel.sh@17 -- # local accel_module 00:07:49.447 20:02:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.447 20:02:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.447 20:02:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.447 20:02:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.447 20:02:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.447 20:02:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.447 20:02:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.447 20:02:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.447 20:02:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.447 20:02:47 -- accel/accel.sh@42 -- # jq -r . 00:07:49.447 [2024-04-25 20:02:47.252095] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:49.447 [2024-04-25 20:02:47.252176] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061256 ] 00:07:49.447 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.447 [2024-04-25 20:02:47.356616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.706 [2024-04-25 20:02:47.453037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.082 20:02:48 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:51.082 00:07:51.082 SPDK Configuration: 00:07:51.082 Core mask: 0x1 00:07:51.082 00:07:51.082 Accel Perf Configuration: 00:07:51.082 Workload Type: decompress 00:07:51.082 Transfer size: 4096 bytes 00:07:51.082 Vector count 1 00:07:51.082 Module: software 00:07:51.082 File Name: /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:51.082 Queue depth: 32 00:07:51.082 Allocate depth: 32 00:07:51.082 # threads/core: 2 00:07:51.082 Run time: 1 seconds 00:07:51.082 Verify: Yes 00:07:51.082 00:07:51.082 Running for 1 seconds... 00:07:51.082 00:07:51.082 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.082 ------------------------------------------------------------------------------------ 00:07:51.082 0,1 29024/s 53 MiB/s 0 0 00:07:51.082 0,0 28896/s 53 MiB/s 0 0 00:07:51.082 ==================================================================================== 00:07:51.082 Total 57920/s 226 MiB/s 0 0' 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.082 20:02:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.082 20:02:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.082 20:02:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.082 20:02:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.082 20:02:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.082 20:02:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.082 20:02:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.082 20:02:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.082 20:02:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.082 20:02:48 -- accel/accel.sh@42 -- # jq -r . 00:07:51.082 [2024-04-25 20:02:48.717126] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
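The decomp_mthread run above swaps the multi-core mask for -T 2, which the configuration dump reports as '# threads/core: 2'; the two result rows, '0,0' and '0,1', are the two worker threads sharing core 0. A sketch of the standalone equivalent (flags copied from the traced accel_test arguments):
# Hedged sketch: single-core decompress with two worker threads per core.
SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
"$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -T 2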
00:07:51.082 [2024-04-25 20:02:48.717201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061440 ] 00:07:51.082 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.082 [2024-04-25 20:02:48.820416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.082 [2024-04-25 20:02:48.917560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.082 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.082 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.082 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.082 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.082 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.082 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.082 20:02:48 -- accel/accel.sh@21 -- # val=0x1 00:07:51.082 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.082 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.082 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.082 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.082 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.082 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val=decompress 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val=software 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val=32 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- 
accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val=32 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val=2 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val=Yes 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:51.083 20:02:48 -- accel/accel.sh@21 -- # val= 00:07:51.083 20:02:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # IFS=: 00:07:51.083 20:02:48 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@21 -- # val= 00:07:52.461 20:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@21 -- # val= 00:07:52.461 20:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@21 -- # val= 00:07:52.461 20:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@21 -- # val= 00:07:52.461 20:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@21 -- # val= 00:07:52.461 20:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@21 -- # val= 00:07:52.461 20:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@21 -- # val= 00:07:52.461 20:02:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # IFS=: 00:07:52.461 20:02:50 -- accel/accel.sh@20 -- # read -r var val 00:07:52.461 20:02:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.461 20:02:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.461 20:02:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.461 00:07:52.461 real 0m2.937s 00:07:52.461 user 0m2.619s 00:07:52.461 sys 0m0.323s 00:07:52.461 20:02:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.461 20:02:50 -- common/autotest_common.sh@10 -- # set +x 
00:07:52.461 ************************************ 00:07:52.461 END TEST accel_decomp_mthread 00:07:52.461 ************************************ 00:07:52.461 20:02:50 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.461 20:02:50 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:52.461 20:02:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.461 20:02:50 -- common/autotest_common.sh@10 -- # set +x 00:07:52.461 ************************************ 00:07:52.461 START TEST accel_deomp_full_mthread 00:07:52.461 ************************************ 00:07:52.461 20:02:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.461 20:02:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.461 20:02:50 -- accel/accel.sh@17 -- # local accel_module 00:07:52.461 20:02:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.461 20:02:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.461 20:02:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.461 20:02:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.461 20:02:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.461 20:02:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.461 20:02:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.461 20:02:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.461 20:02:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.461 20:02:50 -- accel/accel.sh@42 -- # jq -r . 00:07:52.461 [2024-04-25 20:02:50.231992] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:52.461 [2024-04-25 20:02:50.232063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061638 ] 00:07:52.461 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.461 [2024-04-25 20:02:50.339292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.765 [2024-04-25 20:02:50.438816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.178 20:02:51 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:54.178 00:07:54.178 SPDK Configuration: 00:07:54.178 Core mask: 0x1 00:07:54.178 00:07:54.178 Accel Perf Configuration: 00:07:54.178 Workload Type: decompress 00:07:54.178 Transfer size: 111250 bytes 00:07:54.178 Vector count 1 00:07:54.178 Module: software 00:07:54.178 File Name: /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:54.178 Queue depth: 32 00:07:54.178 Allocate depth: 32 00:07:54.178 # threads/core: 2 00:07:54.178 Run time: 1 seconds 00:07:54.178 Verify: Yes 00:07:54.178 00:07:54.178 Running for 1 seconds... 
00:07:54.178 00:07:54.178 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:54.178 ------------------------------------------------------------------------------------ 00:07:54.178 0,1 1952/s 80 MiB/s 0 0 00:07:54.178 0,0 1920/s 79 MiB/s 0 0 00:07:54.178 ==================================================================================== 00:07:54.178 Total 3872/s 410 MiB/s 0 0' 00:07:54.178 20:02:51 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:51 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.178 20:02:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.178 20:02:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.178 20:02:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.178 20:02:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.178 20:02:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.178 20:02:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.178 20:02:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.178 20:02:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.178 20:02:51 -- accel/accel.sh@42 -- # jq -r . 00:07:54.178 [2024-04-25 20:02:51.742942] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:54.178 [2024-04-25 20:02:51.743031] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2061883 ] 00:07:54.178 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.178 [2024-04-25 20:02:51.847616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.178 [2024-04-25 20:02:51.948416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=0x1 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=decompress 00:07:54.178 20:02:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=software 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=32 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=32 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=2 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.178 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.178 20:02:52 -- accel/accel.sh@21 -- # val=Yes 00:07:54.178 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.179 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.179 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.179 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.179 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.179 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.179 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:54.179 20:02:52 -- accel/accel.sh@21 -- # val= 00:07:54.179 20:02:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.179 20:02:52 -- accel/accel.sh@20 -- # IFS=: 00:07:54.179 20:02:52 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@21 -- # val= 00:07:55.557 20:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@21 -- # val= 00:07:55.557 20:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@21 -- # val= 00:07:55.557 20:02:53 -- accel/accel.sh@22 -- # case "$var" in 
00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@21 -- # val= 00:07:55.557 20:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@21 -- # val= 00:07:55.557 20:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@21 -- # val= 00:07:55.557 20:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@21 -- # val= 00:07:55.557 20:02:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # IFS=: 00:07:55.557 20:02:53 -- accel/accel.sh@20 -- # read -r var val 00:07:55.557 20:02:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.557 20:02:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:55.557 20:02:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.557 00:07:55.557 real 0m3.026s 00:07:55.557 user 0m2.687s 00:07:55.557 sys 0m0.343s 00:07:55.557 20:02:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.557 20:02:53 -- common/autotest_common.sh@10 -- # set +x 00:07:55.557 ************************************ 00:07:55.557 END TEST accel_deomp_full_mthread 00:07:55.557 ************************************ 00:07:55.557 20:02:53 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:55.557 20:02:53 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.557 20:02:53 -- accel/accel.sh@129 -- # build_accel_config 00:07:55.557 20:02:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:55.557 20:02:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.557 20:02:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.557 20:02:53 -- common/autotest_common.sh@10 -- # set +x 00:07:55.557 20:02:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.557 20:02:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.557 20:02:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.557 20:02:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.557 20:02:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.557 20:02:53 -- accel/accel.sh@42 -- # jq -r . 00:07:55.557 ************************************ 00:07:55.557 START TEST accel_dif_functional_tests 00:07:55.557 ************************************ 00:07:55.557 20:02:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.557 [2024-04-25 20:02:53.327500] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:55.557 [2024-04-25 20:02:53.327571] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2062182 ] 00:07:55.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.557 [2024-04-25 20:02:53.432213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.816 [2024-04-25 20:02:53.533238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.816 [2024-04-25 20:02:53.533324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.816 [2024-04-25 20:02:53.533328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.816 [2024-04-25 20:02:53.743416] 'OCF_Core' volume operations registered 00:07:55.816 [2024-04-25 20:02:53.746910] 'OCF_Cache' volume operations registered 00:07:55.816 [2024-04-25 20:02:53.750869] 'OCF Composite' volume operations registered 00:07:56.075 [2024-04-25 20:02:53.754355] 'SPDK_block_device' volume operations registered 00:07:56.075 00:07:56.075 00:07:56.075 CUnit - A unit testing framework for C - Version 2.1-3 00:07:56.075 http://cunit.sourceforge.net/ 00:07:56.075 00:07:56.075 00:07:56.075 Suite: accel_dif 00:07:56.075 Test: verify: DIF generated, GUARD check ...passed 00:07:56.075 Test: verify: DIF generated, APPTAG check ...passed 00:07:56.075 Test: verify: DIF generated, REFTAG check ...passed 00:07:56.075 Test: verify: DIF not generated, GUARD check ...[2024-04-25 20:02:53.758790] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:56.075 [2024-04-25 20:02:53.758836] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:56.075 passed 00:07:56.075 Test: verify: DIF not generated, APPTAG check ...[2024-04-25 20:02:53.758875] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:56.075 [2024-04-25 20:02:53.758899] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:56.075 passed 00:07:56.075 Test: verify: DIF not generated, REFTAG check ...[2024-04-25 20:02:53.758926] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:56.075 [2024-04-25 20:02:53.758948] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:56.075 passed 00:07:56.075 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:56.075 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-25 20:02:53.759010] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:56.075 passed 00:07:56.075 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:56.075 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:56.075 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:56.075 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-25 20:02:53.759159] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:56.075 passed 00:07:56.075 Test: generate copy: DIF generated, GUARD check ...passed 00:07:56.075 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:56.075 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:56.075 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:56.075 Test: generate copy: DIF generated, no APPTAG check 
flag set ...passed 00:07:56.075 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:56.075 Test: generate copy: iovecs-len validate ...[2024-04-25 20:02:53.759399] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:56.075 passed 00:07:56.075 Test: generate copy: buffer alignment validate ...passed 00:07:56.075 00:07:56.075 Run Summary: Type Total Ran Passed Failed Inactive 00:07:56.075 suites 1 1 n/a 0 0 00:07:56.075 tests 20 20 20 0 0 00:07:56.075 asserts 204 204 204 0 n/a 00:07:56.075 00:07:56.075 Elapsed time = 0.002 seconds 00:07:56.334 00:07:56.334 real 0m0.844s 00:07:56.335 user 0m1.507s 00:07:56.335 sys 0m0.293s 00:07:56.335 20:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.335 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.335 ************************************ 00:07:56.335 END TEST accel_dif_functional_tests 00:07:56.335 ************************************ 00:07:56.335 00:07:56.335 real 1m3.788s 00:07:56.335 user 1m10.550s 00:07:56.335 sys 0m8.883s 00:07:56.335 20:02:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.335 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.335 ************************************ 00:07:56.335 END TEST accel 00:07:56.335 ************************************ 00:07:56.335 20:02:54 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:56.335 20:02:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.335 20:02:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.335 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.335 ************************************ 00:07:56.335 START TEST accel_rpc 00:07:56.335 ************************************ 00:07:56.335 20:02:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:56.593 * Looking for test storage... 00:07:56.593 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel 00:07:56.593 20:02:54 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:56.593 20:02:54 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2062267 00:07:56.593 20:02:54 -- accel/accel_rpc.sh@15 -- # waitforlisten 2062267 00:07:56.593 20:02:54 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:56.593 20:02:54 -- common/autotest_common.sh@819 -- # '[' -z 2062267 ']' 00:07:56.593 20:02:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.593 20:02:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.593 20:02:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.593 20:02:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.594 20:02:54 -- common/autotest_common.sh@10 -- # set +x 00:07:56.594 [2024-04-25 20:02:54.379847] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:07:56.594 [2024-04-25 20:02:54.379945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2062267 ] 00:07:56.594 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.594 [2024-04-25 20:02:54.488446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.852 [2024-04-25 20:02:54.588597] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.852 [2024-04-25 20:02:54.588761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.420 20:02:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:57.420 20:02:55 -- common/autotest_common.sh@852 -- # return 0 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:57.420 20:02:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.420 20:02:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.420 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.420 ************************************ 00:07:57.420 START TEST accel_assign_opcode 00:07:57.420 ************************************ 00:07:57.420 20:02:55 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:57.420 20:02:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.420 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.420 [2024-04-25 20:02:55.306998] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:57.420 20:02:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:57.420 20:02:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.420 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.420 [2024-04-25 20:02:55.315010] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:57.420 20:02:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.420 20:02:55 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:57.420 20:02:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.420 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.679 [2024-04-25 20:02:55.511554] 'OCF_Core' volume operations registered 00:07:57.679 [2024-04-25 20:02:55.515047] 'OCF_Cache' volume operations registered 00:07:57.679 [2024-04-25 20:02:55.518993] 'OCF Composite' volume operations registered 00:07:57.679 [2024-04-25 20:02:55.522537] 'SPDK_block_device' volume operations registered 00:07:57.938 20:02:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.938 20:02:55 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:57.938 20:02:55 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:57.938 20:02:55 -- accel/accel_rpc.sh@42 -- # grep software 00:07:57.938 20:02:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:57.938 20:02:55 -- common/autotest_common.sh@10 -- 
# set +x 00:07:57.938 20:02:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:57.938 software 00:07:57.938 00:07:57.938 real 0m0.403s 00:07:57.938 user 0m0.045s 00:07:57.938 sys 0m0.012s 00:07:57.938 20:02:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.938 20:02:55 -- common/autotest_common.sh@10 -- # set +x 00:07:57.938 ************************************ 00:07:57.938 END TEST accel_assign_opcode 00:07:57.938 ************************************ 00:07:57.938 20:02:55 -- accel/accel_rpc.sh@55 -- # killprocess 2062267 00:07:57.938 20:02:55 -- common/autotest_common.sh@926 -- # '[' -z 2062267 ']' 00:07:57.938 20:02:55 -- common/autotest_common.sh@930 -- # kill -0 2062267 00:07:57.938 20:02:55 -- common/autotest_common.sh@931 -- # uname 00:07:57.938 20:02:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.938 20:02:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2062267 00:07:57.938 20:02:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.938 20:02:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.938 20:02:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2062267' 00:07:57.938 killing process with pid 2062267 00:07:57.938 20:02:55 -- common/autotest_common.sh@945 -- # kill 2062267 00:07:57.938 20:02:55 -- common/autotest_common.sh@950 -- # wait 2062267 00:07:58.507 00:07:58.507 real 0m2.153s 00:07:58.507 user 0m2.110s 00:07:58.507 sys 0m0.647s 00:07:58.507 20:02:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.507 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.507 ************************************ 00:07:58.507 END TEST accel_rpc 00:07:58.507 ************************************ 00:07:58.507 20:02:56 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.507 20:02:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.507 20:02:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.507 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.507 ************************************ 00:07:58.507 START TEST app_cmdline 00:07:58.507 ************************************ 00:07:58.507 20:02:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh 00:07:58.766 * Looking for test storage... 00:07:58.766 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app 00:07:58.766 20:02:56 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:58.766 20:02:56 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2062676 00:07:58.766 20:02:56 -- app/cmdline.sh@18 -- # waitforlisten 2062676 00:07:58.766 20:02:56 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:58.766 20:02:56 -- common/autotest_common.sh@819 -- # '[' -z 2062676 ']' 00:07:58.766 20:02:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.766 20:02:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.766 20:02:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
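The accel_assign_opcode trace above (completed just before the app_cmdline target begins starting up here) reduces to three RPC calls against an spdk_tgt launched with --wait-for-rpc: assign the copy opcode to the software module, finish framework initialization, then read the assignment back. A sketch using scripts/rpc.py, with only method names that appear in the trace:
# Hedged sketch of the RPC sequence from TEST accel_assign_opcode; assumes a target
# started as in the trace: build/bin/spdk_tgt --wait-for-rpc
SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
"$SPDK/scripts/rpc.py" accel_assign_opc -o copy -m software    # assign 'copy' to the software module
"$SPDK/scripts/rpc.py" framework_start_init                    # finish subsystem initialization
"$SPDK/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy # expected output: software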
00:07:58.766 20:02:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.766 20:02:56 -- common/autotest_common.sh@10 -- # set +x 00:07:58.766 [2024-04-25 20:02:56.576630] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:07:58.766 [2024-04-25 20:02:56.576729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2062676 ] 00:07:58.766 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.766 [2024-04-25 20:02:56.680786] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.025 [2024-04-25 20:02:56.783561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:59.025 [2024-04-25 20:02:56.783720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.285 [2024-04-25 20:02:56.981203] 'OCF_Core' volume operations registered 00:07:59.285 [2024-04-25 20:02:56.984680] 'OCF_Cache' volume operations registered 00:07:59.285 [2024-04-25 20:02:56.988624] 'OCF Composite' volume operations registered 00:07:59.285 [2024-04-25 20:02:56.992112] 'SPDK_block_device' volume operations registered 00:07:59.853 20:02:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.853 20:02:57 -- common/autotest_common.sh@852 -- # return 0 00:07:59.853 20:02:57 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:59.853 { 00:07:59.853 "version": "SPDK v24.01.1-pre git sha1 36faa8c31", 00:07:59.853 "fields": { 00:07:59.853 "major": 24, 00:07:59.853 "minor": 1, 00:07:59.853 "patch": 1, 00:07:59.853 "suffix": "-pre", 00:07:59.853 "commit": "36faa8c31" 00:07:59.853 } 00:07:59.853 } 00:07:59.853 20:02:57 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:59.853 20:02:57 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:59.853 20:02:57 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:59.853 20:02:57 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:59.853 20:02:57 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:59.853 20:02:57 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:59.853 20:02:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.853 20:02:57 -- common/autotest_common.sh@10 -- # set +x 00:07:59.853 20:02:57 -- app/cmdline.sh@26 -- # sort 00:07:59.853 20:02:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.853 20:02:57 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:59.853 20:02:57 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:59.853 20:02:57 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.853 20:02:57 -- common/autotest_common.sh@640 -- # local es=0 00:07:59.853 20:02:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:59.853 20:02:57 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:07:59.853 20:02:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.853 20:02:57 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:07:59.853 20:02:57 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.853 20:02:57 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:07:59.853 20:02:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.853 20:02:57 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:07:59.853 20:02:57 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.853 20:02:57 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.112 request: 00:08:00.112 { 00:08:00.112 "method": "env_dpdk_get_mem_stats", 00:08:00.112 "req_id": 1 00:08:00.112 } 00:08:00.112 Got JSON-RPC error response 00:08:00.112 response: 00:08:00.112 { 00:08:00.112 "code": -32601, 00:08:00.112 "message": "Method not found" 00:08:00.112 } 00:08:00.112 20:02:58 -- common/autotest_common.sh@643 -- # es=1 00:08:00.112 20:02:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:00.112 20:02:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:00.112 20:02:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:00.112 20:02:58 -- app/cmdline.sh@1 -- # killprocess 2062676 00:08:00.112 20:02:58 -- common/autotest_common.sh@926 -- # '[' -z 2062676 ']' 00:08:00.112 20:02:58 -- common/autotest_common.sh@930 -- # kill -0 2062676 00:08:00.112 20:02:58 -- common/autotest_common.sh@931 -- # uname 00:08:00.112 20:02:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:00.112 20:02:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2062676 00:08:00.372 20:02:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:00.372 20:02:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:00.372 20:02:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2062676' 00:08:00.372 killing process with pid 2062676 00:08:00.372 20:02:58 -- common/autotest_common.sh@945 -- # kill 2062676 00:08:00.372 20:02:58 -- common/autotest_common.sh@950 -- # wait 2062676 00:08:00.940 00:08:00.940 real 0m2.208s 00:08:00.940 user 0m2.530s 00:08:00.940 sys 0m0.676s 00:08:00.940 20:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.940 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:08:00.940 ************************************ 00:08:00.940 END TEST app_cmdline 00:08:00.940 ************************************ 00:08:00.940 20:02:58 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh 00:08:00.940 20:02:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:00.940 20:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.941 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:08:00.941 ************************************ 00:08:00.941 START TEST version 00:08:00.941 ************************************ 00:08:00.941 20:02:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh 00:08:00.941 * Looking for test storage... 
00:08:00.941 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app 00:08:00.941 20:02:58 -- app/version.sh@17 -- # get_header_version major 00:08:00.941 20:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:08:00.941 20:02:58 -- app/version.sh@14 -- # cut -f2 00:08:00.941 20:02:58 -- app/version.sh@14 -- # tr -d '"' 00:08:00.941 20:02:58 -- app/version.sh@17 -- # major=24 00:08:00.941 20:02:58 -- app/version.sh@18 -- # get_header_version minor 00:08:00.941 20:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:08:00.941 20:02:58 -- app/version.sh@14 -- # cut -f2 00:08:00.941 20:02:58 -- app/version.sh@14 -- # tr -d '"' 00:08:00.941 20:02:58 -- app/version.sh@18 -- # minor=1 00:08:00.941 20:02:58 -- app/version.sh@19 -- # get_header_version patch 00:08:00.941 20:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:08:00.941 20:02:58 -- app/version.sh@14 -- # cut -f2 00:08:00.941 20:02:58 -- app/version.sh@14 -- # tr -d '"' 00:08:00.941 20:02:58 -- app/version.sh@19 -- # patch=1 00:08:00.941 20:02:58 -- app/version.sh@20 -- # get_header_version suffix 00:08:00.941 20:02:58 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h 00:08:00.941 20:02:58 -- app/version.sh@14 -- # cut -f2 00:08:00.941 20:02:58 -- app/version.sh@14 -- # tr -d '"' 00:08:00.941 20:02:58 -- app/version.sh@20 -- # suffix=-pre 00:08:00.941 20:02:58 -- app/version.sh@22 -- # version=24.1 00:08:00.941 20:02:58 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:00.941 20:02:58 -- app/version.sh@25 -- # version=24.1.1 00:08:00.941 20:02:58 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:00.941 20:02:58 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python 00:08:00.941 20:02:58 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:00.941 20:02:58 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:00.941 20:02:58 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:00.941 00:08:00.941 real 0m0.193s 00:08:00.941 user 0m0.106s 00:08:00.941 sys 0m0.134s 00:08:00.941 20:02:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.941 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:08:00.941 ************************************ 00:08:00.941 END TEST version 00:08:00.941 ************************************ 00:08:01.200 20:02:58 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:01.200 20:02:58 -- spdk/autotest.sh@204 -- # uname -s 00:08:01.200 20:02:58 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:01.200 20:02:58 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:01.200 20:02:58 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:01.200 20:02:58 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:08:01.200 20:02:58 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme 00:08:01.200 20:02:58 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:01.200 20:02:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.200 20:02:58 -- common/autotest_common.sh@10 -- # set +x 00:08:01.200 ************************************ 00:08:01.200 START TEST blockdev_nvme 00:08:01.200 ************************************ 00:08:01.200 20:02:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme 00:08:01.200 * Looking for test storage... 00:08:01.200 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev 00:08:01.200 20:02:59 -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:01.200 20:02:59 -- bdev/nbd_common.sh@6 -- # set -e 00:08:01.200 20:02:59 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:01.200 20:02:59 -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:08:01.200 20:02:59 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json 00:08:01.200 20:02:59 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json 00:08:01.200 20:02:59 -- bdev/blockdev.sh@18 -- # : 00:08:01.200 20:02:59 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:01.200 20:02:59 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:01.200 20:02:59 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:01.200 20:02:59 -- bdev/blockdev.sh@672 -- # uname -s 00:08:01.200 20:02:59 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:01.200 20:02:59 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:01.200 20:02:59 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:08:01.200 20:02:59 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:01.200 20:02:59 -- bdev/blockdev.sh@682 -- # dek= 00:08:01.200 20:02:59 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:01.200 20:02:59 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:01.200 20:02:59 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:01.200 20:02:59 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:08:01.200 20:02:59 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:08:01.200 20:02:59 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:01.200 20:02:59 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=2063157 00:08:01.200 20:02:59 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:01.200 20:02:59 -- bdev/blockdev.sh@47 -- # waitforlisten 2063157 00:08:01.200 20:02:59 -- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:08:01.200 20:02:59 -- common/autotest_common.sh@819 -- # '[' -z 2063157 ']' 00:08:01.200 20:02:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.200 20:02:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:01.200 20:02:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.200 20:02:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:01.200 20:02:59 -- common/autotest_common.sh@10 -- # set +x 00:08:01.200 [2024-04-25 20:02:59.098279] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:01.200 [2024-04-25 20:02:59.098343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063157 ] 00:08:01.200 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.460 [2024-04-25 20:02:59.191085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.460 [2024-04-25 20:02:59.287281] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:01.460 [2024-04-25 20:02:59.287445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.718 [2024-04-25 20:02:59.475846] 'OCF_Core' volume operations registered 00:08:01.718 [2024-04-25 20:02:59.479326] 'OCF_Cache' volume operations registered 00:08:01.718 [2024-04-25 20:02:59.483268] 'OCF Composite' volume operations registered 00:08:01.718 [2024-04-25 20:02:59.486754] 'SPDK_block_device' volume operations registered 00:08:02.286 20:02:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:02.286 20:02:59 -- common/autotest_common.sh@852 -- # return 0 00:08:02.286 20:02:59 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:02.286 20:02:59 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:08:02.286 20:02:59 -- bdev/blockdev.sh@79 -- # local json 00:08:02.286 20:02:59 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:02.286 20:02:59 -- bdev/blockdev.sh@80 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:02.286 20:03:00 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:5e:00.0" } } ] }'\''' 00:08:02.286 20:03:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:02.286 20:03:00 -- common/autotest_common.sh@10 -- # set +x 00:08:05.588 20:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.588 20:03:02 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:05.588 20:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.588 20:03:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.588 20:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.588 20:03:02 -- bdev/blockdev.sh@738 -- # cat 00:08:05.588 20:03:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:05.588 20:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.588 20:03:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.588 20:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.588 20:03:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:05.588 20:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.588 20:03:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.588 20:03:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.588 20:03:02 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:05.588 20:03:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.588 20:03:02 -- common/autotest_common.sh@10 -- # set +x 00:08:05.588 20:03:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.588 20:03:03 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:05.588 20:03:03 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:05.588 20:03:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:05.588 20:03:03 -- bdev/blockdev.sh@746 -- # jq -r 
'.[] | select(.claimed == false)' 00:08:05.588 20:03:03 -- common/autotest_common.sh@10 -- # set +x 00:08:05.588 20:03:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:05.588 20:03:03 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:05.588 20:03:03 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "8fff764e-b1e1-4b93-97eb-d37ba71d5cce"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 512,' ' "num_blocks": 7814037168,' ' "uuid": "8fff764e-b1e1-4b93-97eb-d37ba71d5cce",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:5e:00.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:5e:00.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x8086",' ' "model_number": "INTEL SSDPE2KX040T8",' ' "serial_number": "BTLJ83030AK84P0DGN",' ' "firmware_revision": "VDV10184",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 1,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.2"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:05.588 20:03:03 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:05.588 20:03:03 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:05.588 20:03:03 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:08:05.588 20:03:03 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:05.588 20:03:03 -- bdev/blockdev.sh@752 -- # killprocess 2063157 00:08:05.588 20:03:03 -- common/autotest_common.sh@926 -- # '[' -z 2063157 ']' 00:08:05.588 20:03:03 -- common/autotest_common.sh@930 -- # kill -0 2063157 00:08:05.588 20:03:03 -- common/autotest_common.sh@931 -- # uname 00:08:05.588 20:03:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:05.588 20:03:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2063157 00:08:05.588 20:03:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:05.588 20:03:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:05.588 20:03:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2063157' 00:08:05.588 killing process with pid 2063157 00:08:05.588 20:03:03 -- common/autotest_common.sh@945 -- # kill 2063157 00:08:05.588 20:03:03 -- common/autotest_common.sh@950 -- # wait 2063157 00:08:09.779 20:03:07 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:09.779 20:03:07 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:09.779 20:03:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:09.779 20:03:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:09.779 20:03:07 -- common/autotest_common.sh@10 -- # set +x 00:08:09.779 ************************************ 00:08:09.779 START TEST bdev_hello_world 00:08:09.779 ************************************ 00:08:09.779 20:03:07 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:09.779 [2024-04-25 20:03:07.368535] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:09.779 [2024-04-25 20:03:07.368606] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2064776 ] 00:08:09.779 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.779 [2024-04-25 20:03:07.473991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.779 [2024-04-25 20:03:07.571764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.037 [2024-04-25 20:03:07.823617] 'OCF_Core' volume operations registered 00:08:10.037 [2024-04-25 20:03:07.827097] 'OCF_Cache' volume operations registered 00:08:10.037 [2024-04-25 20:03:07.831067] 'OCF Composite' volume operations registered 00:08:10.037 [2024-04-25 20:03:07.834556] 'SPDK_block_device' volume operations registered 00:08:13.324 [2024-04-25 20:03:10.699684] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:13.324 [2024-04-25 20:03:10.699725] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:13.324 [2024-04-25 20:03:10.699745] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:13.324 [2024-04-25 20:03:10.701965] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:13.324 [2024-04-25 20:03:10.702130] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:13.324 [2024-04-25 20:03:10.702148] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:13.324 [2024-04-25 20:03:10.702785] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
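The "Hello World!" readback above completes hello_bdev's write/read cycle against Nvme0n1. The --json config it consumes is assembled earlier in blockdev.sh from the live target (the save_subsystem_config calls above); purely as an illustration, a hand-written config attaching the same controller would take roughly this shape (the top-level "subsystems" wrapper is an assumption here, not copied from the generated file):

    # Hypothetical stand-in for test/bdev/bdev.json, modeled on the
    # bdev_nvme_attach_controller parameters logged during setup.
    cat > bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:5e:00.0" }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/hello_bdev --json bdev.json -b Nvme0n1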
00:08:13.324 00:08:13.324 [2024-04-25 20:03:10.702805] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:17.531 00:08:17.531 real 0m7.449s 00:08:17.531 user 0m6.336s 00:08:17.531 sys 0m0.361s 00:08:17.531 20:03:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.531 20:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:17.531 ************************************ 00:08:17.531 END TEST bdev_hello_world 00:08:17.531 ************************************ 00:08:17.531 20:03:14 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:17.531 20:03:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:17.531 20:03:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.531 20:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:17.531 ************************************ 00:08:17.531 START TEST bdev_bounds 00:08:17.531 ************************************ 00:08:17.531 20:03:14 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:08:17.531 20:03:14 -- bdev/blockdev.sh@288 -- # bdevio_pid=2065779 00:08:17.531 20:03:14 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.531 20:03:14 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 2065779' 00:08:17.531 Process bdevio pid: 2065779 00:08:17.531 20:03:14 -- bdev/blockdev.sh@291 -- # waitforlisten 2065779 00:08:17.531 20:03:14 -- common/autotest_common.sh@819 -- # '[' -z 2065779 ']' 00:08:17.531 20:03:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.531 20:03:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:17.531 20:03:14 -- bdev/blockdev.sh@287 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:08:17.531 20:03:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.531 20:03:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:17.531 20:03:14 -- common/autotest_common.sh@10 -- # set +x 00:08:17.531 [2024-04-25 20:03:14.875845] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:17.531 [2024-04-25 20:03:14.875920] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2065779 ] 00:08:17.531 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.531 [2024-04-25 20:03:14.982744] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:17.531 [2024-04-25 20:03:15.086963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.531 [2024-04-25 20:03:15.087047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.531 [2024-04-25 20:03:15.087051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.531 [2024-04-25 20:03:15.321308] 'OCF_Core' volume operations registered 00:08:17.531 [2024-04-25 20:03:15.324778] 'OCF_Cache' volume operations registered 00:08:17.531 [2024-04-25 20:03:15.328680] 'OCF Composite' volume operations registered 00:08:17.531 [2024-04-25 20:03:15.332059] 'SPDK_block_device' volume operations registered 00:08:20.837 20:03:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:20.837 20:03:18 -- common/autotest_common.sh@852 -- # return 0 00:08:20.837 20:03:18 -- bdev/blockdev.sh@292 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:20.837 I/O targets: 00:08:20.837 Nvme0n1: 7814037168 blocks of 512 bytes (3815448 MiB) 00:08:20.837 00:08:20.837 00:08:20.837 CUnit - A unit testing framework for C - Version 2.1-3 00:08:20.837 http://cunit.sourceforge.net/ 00:08:20.837 00:08:20.837 00:08:20.837 Suite: bdevio tests on: Nvme0n1 00:08:20.837 Test: blockdev write read block ...passed 00:08:20.837 Test: blockdev write zeroes read block ...passed 00:08:20.837 Test: blockdev write zeroes read no split ...passed 00:08:20.837 Test: blockdev write zeroes read split ...passed 00:08:20.837 Test: blockdev write zeroes read split partial ...passed 00:08:20.837 Test: blockdev reset ...[2024-04-25 20:03:18.738167] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:08:20.837 [2024-04-25 20:03:18.740606] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:20.837 passed 00:08:20.837 Test: blockdev write read 8 blocks ...passed 00:08:20.837 Test: blockdev write read size > 128k ...passed 00:08:20.837 Test: blockdev write read invalid size ...passed 00:08:20.837 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:20.837 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:20.837 Test: blockdev write read max offset ...passed 00:08:20.837 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:20.837 Test: blockdev writev readv 8 blocks ...passed 00:08:20.837 Test: blockdev writev readv 30 x 1block ...passed 00:08:20.837 Test: blockdev writev readv block ...passed 00:08:20.837 Test: blockdev writev readv size > 128k ...passed 00:08:20.837 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:20.837 Test: blockdev comparev and writev ...passed 00:08:20.837 Test: blockdev nvme passthru rw ...passed 00:08:20.837 Test: blockdev nvme passthru vendor specific ...[2024-04-25 20:03:18.756114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:894 PRP1 0x0 PRP2 0x0 00:08:20.837 [2024-04-25 20:03:18.756148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:894 cdw0:0 sqhd:0056 p:1 m:0 dnr:1 00:08:20.837 passed 00:08:20.837 Test: blockdev nvme admin passthru ...passed 00:08:20.837 Test: blockdev copy ...passed 00:08:20.837 00:08:20.837 Run Summary: Type Total Ran Passed Failed Inactive 00:08:20.837 suites 1 1 n/a 0 0 00:08:20.837 tests 23 23 23 0 0 00:08:20.837 asserts 140 140 140 0 n/a 00:08:20.837 00:08:20.837 Elapsed time = 0.107 seconds 00:08:20.837 0 00:08:21.096 20:03:18 -- bdev/blockdev.sh@293 -- # killprocess 2065779 00:08:21.096 20:03:18 -- common/autotest_common.sh@926 -- # '[' -z 2065779 ']' 00:08:21.096 20:03:18 -- common/autotest_common.sh@930 -- # kill -0 2065779 00:08:21.096 20:03:18 -- common/autotest_common.sh@931 -- # uname 00:08:21.096 20:03:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:21.096 20:03:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2065779 00:08:21.096 20:03:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:21.096 20:03:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:21.096 20:03:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2065779' 00:08:21.096 killing process with pid 2065779 00:08:21.096 20:03:18 -- common/autotest_common.sh@945 -- # kill 2065779 00:08:21.096 20:03:18 -- common/autotest_common.sh@950 -- # wait 2065779 00:08:25.287 20:03:22 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:08:25.287 00:08:25.287 real 0m8.069s 00:08:25.287 user 0m23.357s 00:08:25.287 sys 0m0.648s 00:08:25.287 20:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.287 20:03:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.287 ************************************ 00:08:25.287 END TEST bdev_bounds 00:08:25.287 ************************************ 00:08:25.287 20:03:22 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 '' 00:08:25.287 20:03:22 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:08:25.287 20:03:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.287 20:03:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.287 ************************************ 00:08:25.287 START TEST bdev_nbd 00:08:25.287 
************************************ 00:08:25.287 20:03:22 -- common/autotest_common.sh@1104 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 '' 00:08:25.287 20:03:22 -- bdev/blockdev.sh@298 -- # uname -s 00:08:25.287 20:03:22 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:08:25.287 20:03:22 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.287 20:03:22 -- bdev/blockdev.sh@301 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:08:25.287 20:03:22 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:08:25.287 20:03:22 -- bdev/blockdev.sh@302 -- # local bdev_all 00:08:25.287 20:03:22 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:08:25.287 20:03:22 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:08:25.287 20:03:22 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:25.287 20:03:22 -- bdev/blockdev.sh@309 -- # local nbd_all 00:08:25.287 20:03:22 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:08:25.287 20:03:22 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:08:25.287 20:03:22 -- bdev/blockdev.sh@312 -- # local nbd_list 00:08:25.287 20:03:22 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:08:25.287 20:03:22 -- bdev/blockdev.sh@313 -- # local bdev_list 00:08:25.287 20:03:22 -- bdev/blockdev.sh@316 -- # nbd_pid=2066882 00:08:25.287 20:03:22 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:25.287 20:03:22 -- bdev/blockdev.sh@318 -- # waitforlisten 2066882 /var/tmp/spdk-nbd.sock 00:08:25.287 20:03:22 -- common/autotest_common.sh@819 -- # '[' -z 2066882 ']' 00:08:25.287 20:03:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:25.287 20:03:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:25.287 20:03:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:25.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:25.287 20:03:22 -- bdev/blockdev.sh@315 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:08:25.287 20:03:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:25.287 20:03:22 -- common/autotest_common.sh@10 -- # set +x 00:08:25.287 [2024-04-25 20:03:22.998366] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:08:25.287 [2024-04-25 20:03:22.998437] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.287 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.287 [2024-04-25 20:03:23.107093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.287 [2024-04-25 20:03:23.207958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.545 [2024-04-25 20:03:23.455031] 'OCF_Core' volume operations registered 00:08:25.545 [2024-04-25 20:03:23.458522] 'OCF_Cache' volume operations registered 00:08:25.545 [2024-04-25 20:03:23.462479] 'OCF Composite' volume operations registered 00:08:25.545 [2024-04-25 20:03:23.465991] 'SPDK_block_device' volume operations registered 00:08:28.831 20:03:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:28.831 20:03:26 -- common/autotest_common.sh@852 -- # return 0 00:08:28.831 20:03:26 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@24 -- # local i 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:08:28.831 20:03:26 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:29.089 20:03:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:29.089 20:03:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:29.089 20:03:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:29.089 20:03:26 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:29.089 20:03:26 -- common/autotest_common.sh@857 -- # local i 00:08:29.089 20:03:26 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:29.089 20:03:26 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:29.089 20:03:26 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:29.089 20:03:26 -- common/autotest_common.sh@861 -- # break 00:08:29.089 20:03:26 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:29.089 20:03:26 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:29.089 20:03:26 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:29.089 1+0 records in 00:08:29.089 1+0 records out 00:08:29.089 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405926 s, 10.1 MB/s 00:08:29.090 20:03:26 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:08:29.090 20:03:27 -- common/autotest_common.sh@874 -- # size=4096 00:08:29.090 20:03:27 -- common/autotest_common.sh@875 -- # rm -f 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:08:29.090 20:03:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:29.090 20:03:27 -- common/autotest_common.sh@877 -- # return 0 00:08:29.090 20:03:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:29.090 20:03:27 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:08:29.090 20:03:27 -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.348 20:03:27 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:29.348 { 00:08:29.348 "nbd_device": "/dev/nbd0", 00:08:29.348 "bdev_name": "Nvme0n1" 00:08:29.348 } 00:08:29.348 ]' 00:08:29.348 20:03:27 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:29.348 20:03:27 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:29.348 { 00:08:29.348 "nbd_device": "/dev/nbd0", 00:08:29.348 "bdev_name": "Nvme0n1" 00:08:29.348 } 00:08:29.348 ]' 00:08:29.348 20:03:27 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@51 -- # local i 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:29.607 20:03:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@41 -- # break 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:29.866 20:03:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@65 -- # true 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@65 -- # count=0 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@122 -- # count=0 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@127 -- # return 0 00:08:30.125 20:03:27 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@90 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@12 -- # local i 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:30.125 20:03:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:30.383 /dev/nbd0 00:08:30.383 20:03:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:30.383 20:03:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:30.383 20:03:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:08:30.383 20:03:28 -- common/autotest_common.sh@857 -- # local i 00:08:30.383 20:03:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:08:30.383 20:03:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:08:30.383 20:03:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:08:30.383 20:03:28 -- common/autotest_common.sh@861 -- # break 00:08:30.383 20:03:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:08:30.383 20:03:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:08:30.383 20:03:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:30.383 1+0 records in 00:08:30.383 1+0 records out 00:08:30.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419578 s, 9.8 MB/s 00:08:30.383 20:03:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:08:30.383 20:03:28 -- common/autotest_common.sh@874 -- # size=4096 00:08:30.383 20:03:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:08:30.383 20:03:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:08:30.383 20:03:28 -- common/autotest_common.sh@877 -- # return 0 00:08:30.383 20:03:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.383 20:03:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:30.383 20:03:28 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.383 20:03:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.383 20:03:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.641 20:03:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:30.641 { 00:08:30.642 "nbd_device": "/dev/nbd0", 00:08:30.642 "bdev_name": "Nvme0n1" 00:08:30.642 } 00:08:30.642 ]' 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:30.642 { 00:08:30.642 "nbd_device": "/dev/nbd0", 00:08:30.642 "bdev_name": "Nvme0n1" 00:08:30.642 } 00:08:30.642 ]' 
00:08:30.642 20:03:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@65 -- # count=1 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@95 -- # count=1 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:30.642 256+0 records in 00:08:30.642 256+0 records out 00:08:30.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114999 s, 91.2 MB/s 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:30.642 256+0 records in 00:08:30.642 256+0 records out 00:08:30.642 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021129 s, 49.6 MB/s 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@51 -- # local i 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.642 20:03:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:30.901 20:03:28 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@41 -- # break 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.901 20:03:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.160 20:03:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.160 20:03:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.160 20:03:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@65 -- # true 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.160 20:03:29 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:31.160 20:03:29 -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:31.419 malloc_lvol_verify 00:08:31.419 20:03:29 -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:31.678 275f14ee-0c3d-4987-a553-f7ff01284b9d 00:08:31.678 20:03:29 -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:31.937 38e09d6a-bdf9-4d16-8732-d64905b29220 00:08:31.937 20:03:29 -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:32.195 /dev/nbd0 00:08:32.195 20:03:29 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:32.195 mke2fs 1.46.5 (30-Dec-2021) 00:08:32.195 Discarding device blocks: 0/4096 done 00:08:32.195 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:32.195 00:08:32.195 Allocating group tables: 0/1 done 00:08:32.195 Writing inode tables: 0/1 done 00:08:32.195 Creating journal (1024 blocks): done 00:08:32.195 Writing superblocks and filesystem accounting information: 0/1 done 00:08:32.196 00:08:32.196 20:03:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:32.196 20:03:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:32.196 20:03:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.196 20:03:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.196 20:03:30 -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.196 20:03:30 -- bdev/nbd_common.sh@51 -- # local i 00:08:32.196 20:03:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.196 20:03:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@41 -- # break 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:32.454 20:03:30 -- bdev/nbd_common.sh@147 -- # return 0 00:08:32.454 20:03:30 -- bdev/blockdev.sh@324 -- # killprocess 2066882 00:08:32.454 20:03:30 -- common/autotest_common.sh@926 -- # '[' -z 2066882 ']' 00:08:32.454 20:03:30 -- common/autotest_common.sh@930 -- # kill -0 2066882 00:08:32.454 20:03:30 -- common/autotest_common.sh@931 -- # uname 00:08:32.454 20:03:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:32.454 20:03:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2066882 00:08:32.454 20:03:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:32.454 20:03:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:32.454 20:03:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2066882' 00:08:32.454 killing process with pid 2066882 00:08:32.454 20:03:30 -- common/autotest_common.sh@945 -- # kill 2066882 00:08:32.454 20:03:30 -- common/autotest_common.sh@950 -- # wait 2066882 00:08:36.642 20:03:34 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:36.642 00:08:36.642 real 0m11.428s 00:08:36.642 user 0m13.415s 00:08:36.642 sys 0m1.902s 00:08:36.642 20:03:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.642 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:08:36.642 ************************************ 00:08:36.642 END TEST bdev_nbd 00:08:36.642 ************************************ 00:08:36.642 20:03:34 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:36.642 20:03:34 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:36.642 20:03:34 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:36.642 skipping fio tests on NVMe due to multi-ns failures. 
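With bdev_nbd finished (and the fio pass skipped, as noted above), the NBD round-trip it performed condenses to the sketch below. Every command is lifted from the log; the only assumption is that a bdev_svc or spdk_tgt exposing Nvme0n1 is already listening on /var/tmp/spdk-nbd.sock, with paths relative to the spdk checkout:

    # Export the bdev as a kernel block device over NBD.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    # Push 1 MiB of random data through the NBD device, then read it back and compare.
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0
    rm nbdrandtest
    # Tear the export down again.
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0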
00:08:36.642 20:03:34 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:36.642 20:03:34 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:36.642 20:03:34 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:36.642 20:03:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:36.642 20:03:34 -- common/autotest_common.sh@10 -- # set +x 00:08:36.642 ************************************ 00:08:36.642 START TEST bdev_verify 00:08:36.642 ************************************ 00:08:36.642 20:03:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:36.642 [2024-04-25 20:03:34.466088] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:36.642 [2024-04-25 20:03:34.466169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068523 ] 00:08:36.642 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.642 [2024-04-25 20:03:34.572883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.902 [2024-04-25 20:03:34.673663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.902 [2024-04-25 20:03:34.673670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.161 [2024-04-25 20:03:34.920002] 'OCF_Core' volume operations registered 00:08:37.161 [2024-04-25 20:03:34.923466] 'OCF_Cache' volume operations registered 00:08:37.161 [2024-04-25 20:03:34.927415] 'OCF Composite' volume operations registered 00:08:37.161 [2024-04-25 20:03:34.930912] 'SPDK_block_device' volume operations registered 00:08:40.448 Running I/O for 5 seconds... 
00:08:45.730 00:08:45.730 Latency(us) 00:08:45.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.730 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:45.730 Verification LBA range: start 0x0 length 0x1d1c0beb 00:08:45.730 Nvme0n1 : 5.01 17515.39 68.42 0.00 0.00 7269.49 179.87 10143.83 00:08:45.730 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:45.730 Verification LBA range: start 0x1d1c0beb length 0x1d1c0beb 00:08:45.730 Nvme0n1 : 5.01 17639.36 68.90 0.00 0.00 7218.25 406.04 10599.74 00:08:45.730 =================================================================================================================== 00:08:45.730 Total : 35154.74 137.32 0.00 0.00 7243.77 179.87 10599.74 00:08:49.023 00:08:49.023 real 0m12.532s 00:08:49.023 user 0m23.465s 00:08:49.023 sys 0m0.413s 00:08:49.023 20:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.023 20:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:49.023 ************************************ 00:08:49.023 END TEST bdev_verify 00:08:49.023 ************************************ 00:08:49.283 20:03:46 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:49.283 20:03:46 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:49.283 20:03:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.283 20:03:46 -- common/autotest_common.sh@10 -- # set +x 00:08:49.283 ************************************ 00:08:49.283 START TEST bdev_verify_big_io 00:08:49.283 ************************************ 00:08:49.283 20:03:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:49.283 [2024-04-25 20:03:47.056100] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:08:49.283 [2024-04-25 20:03:47.056181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2070159 ] 00:08:49.283 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.283 [2024-04-25 20:03:47.165098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.542 [2024-04-25 20:03:47.269664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.542 [2024-04-25 20:03:47.269668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.802 [2024-04-25 20:03:47.520038] 'OCF_Core' volume operations registered 00:08:49.802 [2024-04-25 20:03:47.523455] 'OCF_Cache' volume operations registered 00:08:49.802 [2024-04-25 20:03:47.527273] 'OCF Composite' volume operations registered 00:08:49.802 [2024-04-25 20:03:47.530678] 'SPDK_block_device' volume operations registered 00:08:53.142 Running I/O for 5 seconds... 
00:08:58.415 00:08:58.415 Latency(us) 00:08:58.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.415 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:58.415 Verification LBA range: start 0x0 length 0x1d1c0be 00:08:58.415 Nvme0n1 : 5.04 1301.69 81.36 0.00 0.00 96667.80 1731.01 141329.81 00:08:58.415 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:58.415 Verification LBA range: start 0x1d1c0be length 0x1d1c0be 00:08:58.415 Nvme0n1 : 5.05 1320.15 82.51 0.00 0.00 95308.63 1666.89 126740.93 00:08:58.415 =================================================================================================================== 00:08:58.415 Total : 2621.84 163.87 0.00 0.00 95982.83 1666.89 141329.81 00:09:01.705 00:09:01.705 real 0m12.560s 00:09:01.705 user 0m23.524s 00:09:01.705 sys 0m0.387s 00:09:01.705 20:03:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.705 20:03:59 -- common/autotest_common.sh@10 -- # set +x 00:09:01.705 ************************************ 00:09:01.705 END TEST bdev_verify_big_io 00:09:01.705 ************************************ 00:09:01.705 20:03:59 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:01.705 20:03:59 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:01.705 20:03:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:01.705 20:03:59 -- common/autotest_common.sh@10 -- # set +x 00:09:01.705 ************************************ 00:09:01.705 START TEST bdev_write_zeroes 00:09:01.705 ************************************ 00:09:01.705 20:03:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:01.964 [2024-04-25 20:03:59.666416] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:01.964 [2024-04-25 20:03:59.666498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071928 ] 00:09:01.964 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.964 [2024-04-25 20:03:59.771692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.964 [2024-04-25 20:03:59.869004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.222 [2024-04-25 20:04:00.108821] 'OCF_Core' volume operations registered 00:09:02.222 [2024-04-25 20:04:00.112098] 'OCF_Cache' volume operations registered 00:09:02.222 [2024-04-25 20:04:00.115784] 'OCF Composite' volume operations registered 00:09:02.222 [2024-04-25 20:04:00.119071] 'SPDK_block_device' volume operations registered 00:09:05.509 Running I/O for 1 seconds... 
00:09:06.077 00:09:06.077 Latency(us) 00:09:06.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:06.077 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:06.077 Nvme0n1 : 1.00 61923.58 241.89 0.00 0.00 2060.18 737.28 2849.39 00:09:06.077 =================================================================================================================== 00:09:06.077 Total : 61923.58 241.89 0.00 0.00 2060.18 737.28 2849.39 00:09:10.269 00:09:10.269 real 0m8.442s 00:09:10.269 user 0m7.343s 00:09:10.269 sys 0m0.353s 00:09:10.269 20:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.269 20:04:08 -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 ************************************ 00:09:10.269 END TEST bdev_write_zeroes 00:09:10.269 ************************************ 00:09:10.269 20:04:08 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.269 20:04:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:10.269 20:04:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.269 20:04:08 -- common/autotest_common.sh@10 -- # set +x 00:09:10.269 ************************************ 00:09:10.269 START TEST bdev_json_nonenclosed 00:09:10.269 ************************************ 00:09:10.269 20:04:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.269 [2024-04-25 20:04:08.157919] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:10.269 [2024-04-25 20:04:08.157991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073053 ] 00:09:10.269 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.529 [2024-04-25 20:04:08.263726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.529 [2024-04-25 20:04:08.360821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.529 [2024-04-25 20:04:08.360938] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:09:10.529 [2024-04-25 20:04:08.360961] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:10.788 00:09:10.788 real 0m0.368s 00:09:10.788 user 0m0.226s 00:09:10.788 sys 0m0.139s 00:09:10.788 20:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:10.788 20:04:08 -- common/autotest_common.sh@10 -- # set +x 00:09:10.788 ************************************ 00:09:10.788 END TEST bdev_json_nonenclosed 00:09:10.788 ************************************ 00:09:10.788 20:04:08 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.788 20:04:08 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:09:10.788 20:04:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:10.788 20:04:08 -- common/autotest_common.sh@10 -- # set +x 00:09:10.788 ************************************ 00:09:10.788 START TEST bdev_json_nonarray 00:09:10.788 ************************************ 00:09:10.788 20:04:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:10.788 [2024-04-25 20:04:08.579521] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:10.788 [2024-04-25 20:04:08.579605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073078 ] 00:09:10.788 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.788 [2024-04-25 20:04:08.688306] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.047 [2024-04-25 20:04:08.791679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.047 [2024-04-25 20:04:08.791806] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
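Both JSON negative tests in this stretch (bdev_json_nonenclosed above, bdev_json_nonarray here) hand bdevperf a deliberately malformed configuration file and expect spdk_subsystem_init_from_json_config() to reject it with exactly the errors shown: the first file's content is not enclosed in a top-level {} object, and the second one's "subsystems" member is not an array. For contrast, a minimal well-formed configuration for this setup would look roughly like the sketch below (the actual nonenclosed.json/nonarray.json contents are not printed in this log; the controller parameters are the ones used elsewhere in this run):

cat > /tmp/bdev.ok.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:5e:00.0" }
        }
      ]
    }
  ]
}
EOF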
00:09:11.047 [2024-04-25 20:04:08.791830] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.047 00:09:11.047 real 0m0.384s 00:09:11.047 user 0m0.245s 00:09:11.047 sys 0m0.136s 00:09:11.047 20:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.047 20:04:08 -- common/autotest_common.sh@10 -- # set +x 00:09:11.047 ************************************ 00:09:11.047 END TEST bdev_json_nonarray 00:09:11.047 ************************************ 00:09:11.047 20:04:08 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:09:11.047 20:04:08 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:09:11.047 20:04:08 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:09:11.047 20:04:08 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:11.047 20:04:08 -- bdev/blockdev.sh@809 -- # cleanup 00:09:11.047 20:04:08 -- bdev/blockdev.sh@21 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile 00:09:11.047 20:04:08 -- bdev/blockdev.sh@22 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:09:11.047 20:04:08 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:09:11.047 20:04:08 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:09:11.047 20:04:08 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:09:11.047 20:04:08 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:09:11.047 00:09:11.047 real 1m10.037s 00:09:11.047 user 1m45.623s 00:09:11.047 sys 0m5.480s 00:09:11.047 20:04:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.047 20:04:08 -- common/autotest_common.sh@10 -- # set +x 00:09:11.047 ************************************ 00:09:11.047 END TEST blockdev_nvme 00:09:11.047 ************************************ 00:09:11.307 20:04:09 -- spdk/autotest.sh@219 -- # uname -s 00:09:11.307 20:04:09 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:09:11.308 20:04:09 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt 00:09:11.308 20:04:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:11.308 20:04:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.308 20:04:09 -- common/autotest_common.sh@10 -- # set +x 00:09:11.308 ************************************ 00:09:11.308 START TEST blockdev_nvme_gpt 00:09:11.308 ************************************ 00:09:11.308 20:04:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt 00:09:11.308 * Looking for test storage... 
00:09:11.308 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev 00:09:11.308 20:04:09 -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:11.308 20:04:09 -- bdev/nbd_common.sh@6 -- # set -e 00:09:11.308 20:04:09 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:11.308 20:04:09 -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:09:11.308 20:04:09 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json 00:09:11.308 20:04:09 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json 00:09:11.308 20:04:09 -- bdev/blockdev.sh@18 -- # : 00:09:11.308 20:04:09 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:09:11.308 20:04:09 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:09:11.308 20:04:09 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:09:11.308 20:04:09 -- bdev/blockdev.sh@672 -- # uname -s 00:09:11.308 20:04:09 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:09:11.308 20:04:09 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:09:11.308 20:04:09 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:09:11.308 20:04:09 -- bdev/blockdev.sh@681 -- # crypto_device= 00:09:11.308 20:04:09 -- bdev/blockdev.sh@682 -- # dek= 00:09:11.308 20:04:09 -- bdev/blockdev.sh@683 -- # env_ctx= 00:09:11.308 20:04:09 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:09:11.308 20:04:09 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:09:11.308 20:04:09 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:09:11.308 20:04:09 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:09:11.308 20:04:09 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:09:11.308 20:04:09 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=2073309 00:09:11.308 20:04:09 -- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:09:11.308 20:04:09 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:11.308 20:04:09 -- bdev/blockdev.sh@47 -- # waitforlisten 2073309 00:09:11.308 20:04:09 -- common/autotest_common.sh@819 -- # '[' -z 2073309 ']' 00:09:11.308 20:04:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.308 20:04:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:11.308 20:04:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.308 20:04:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:11.308 20:04:09 -- common/autotest_common.sh@10 -- # set +x 00:09:11.308 [2024-04-25 20:04:09.190120] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
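From here on blockdev.sh runs in its gpt variant: a standalone spdk_tgt is brought up, and setup.sh first hands the NVMe device back to the kernel nvme driver so the namespace can be partitioned before SPDK claims it again. The partitioning that the next steps trace boils down to two tools; the GUIDs are read out of module/bdev/gpt/gpt.h so the partitions are recognisable as SPDK test partitions (the values below are the ones that appear further down in this log):

# label the namespace and split it into two halves
parted -s /dev/nvme0n1 mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%

# stamp SPDK's partition type GUIDs and fixed unique GUIDs onto the two partitions
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
       -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
       -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1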
00:09:11.308 [2024-04-25 20:04:09.190208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073309 ] 00:09:11.308 EAL: No free 2048 kB hugepages reported on node 1 00:09:11.568 [2024-04-25 20:04:09.297416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.568 [2024-04-25 20:04:09.392008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:11.568 [2024-04-25 20:04:09.392161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.827 [2024-04-25 20:04:09.588565] 'OCF_Core' volume operations registered 00:09:11.827 [2024-04-25 20:04:09.592044] 'OCF_Cache' volume operations registered 00:09:11.827 [2024-04-25 20:04:09.595981] 'OCF Composite' volume operations registered 00:09:11.827 [2024-04-25 20:04:09.599460] 'SPDK_block_device' volume operations registered 00:09:12.394 20:04:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:12.394 20:04:10 -- common/autotest_common.sh@852 -- # return 0 00:09:12.394 20:04:10 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:09:12.394 20:04:10 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:09:12.394 20:04:10 -- bdev/blockdev.sh@102 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:09:15.680 Waiting for block devices as requested 00:09:15.680 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:09:15.680 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:15.680 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:15.680 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:15.680 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:15.680 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:15.939 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:15.939 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:15.939 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:16.198 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:09:16.198 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:09:16.198 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:09:16.456 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:09:16.456 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:09:16.456 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:09:16.715 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:09:16.715 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:09:16.715 20:04:14 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:09:16.716 20:04:14 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:09:16.716 20:04:14 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:09:16.716 20:04:14 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:09:16.716 20:04:14 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:09:16.716 20:04:14 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:09:16.716 20:04:14 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:09:16.716 20:04:14 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:16.716 20:04:14 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:09:16.716 20:04:14 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:5e:00.0/nvme/nvme0/nvme0n1') 00:09:16.716 20:04:14 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:09:16.716 20:04:14 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:09:16.716 20:04:14 -- bdev/blockdev.sh@108 -- # for nvme_dev in 
"${nvme_devs[@]}" 00:09:16.716 20:04:14 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:09:16.716 20:04:14 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:09:16.716 20:04:14 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:09:16.716 20:04:14 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:09:16.716 BYT; 00:09:16.716 /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:;' 00:09:16.716 20:04:14 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:09:16.716 BYT; 00:09:16.716 /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:16.716 20:04:14 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:09:16.716 20:04:14 -- bdev/blockdev.sh@114 -- # break 00:09:16.716 20:04:14 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:09:16.716 20:04:14 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:16.716 20:04:14 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:16.716 20:04:14 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:16.716 20:04:14 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:09:16.716 20:04:14 -- scripts/common.sh@410 -- # local spdk_guid 00:09:16.716 20:04:14 -- scripts/common.sh@412 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]] 00:09:16.716 20:04:14 -- scripts/common.sh@414 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:09:16.716 20:04:14 -- scripts/common.sh@415 -- # IFS='()' 00:09:16.716 20:04:14 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:09:16.716 20:04:14 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:09:16.716 20:04:14 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:16.716 20:04:14 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:16.716 20:04:14 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:16.716 20:04:14 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:16.716 20:04:14 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:09:16.716 20:04:14 -- scripts/common.sh@422 -- # local spdk_guid 00:09:16.716 20:04:14 -- scripts/common.sh@424 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]] 00:09:16.716 20:04:14 -- scripts/common.sh@426 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:09:16.716 20:04:14 -- scripts/common.sh@427 -- # IFS='()' 00:09:16.716 20:04:14 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:09:16.716 20:04:14 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h 00:09:16.974 20:04:14 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:16.974 20:04:14 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:16.974 20:04:14 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:16.974 20:04:14 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:16.975 20:04:14 -- bdev/blockdev.sh@130 -- # sgdisk -t 
1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:09:17.911 The operation has completed successfully. 00:09:17.911 20:04:15 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:09:18.847 The operation has completed successfully. 00:09:18.847 20:04:16 -- bdev/blockdev.sh@132 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:09:22.136 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:09:22.136 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:09:25.494 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:09:25.494 20:04:23 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:09:25.494 20:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.494 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:25.494 [] 00:09:25.494 20:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:25.494 20:04:23 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:09:25.494 20:04:23 -- bdev/blockdev.sh@79 -- # local json 00:09:25.494 20:04:23 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:09:25.494 20:04:23 -- bdev/blockdev.sh@80 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:09:25.494 20:04:23 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:5e:00.0" } } ] }'\''' 00:09:25.494 20:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:25.494 20:04:23 -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 20:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.785 20:04:26 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:09:28.785 20:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.785 20:04:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 20:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.785 20:04:26 -- bdev/blockdev.sh@738 -- # cat 00:09:28.785 20:04:26 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:09:28.785 20:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.785 20:04:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 20:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.785 20:04:26 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:09:28.785 20:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.785 20:04:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 20:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.785 20:04:26 -- bdev/blockdev.sh@738 
-- # rpc_cmd save_subsystem_config -n iobuf 00:09:28.785 20:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.785 20:04:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 20:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.785 20:04:26 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:09:28.785 20:04:26 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:09:28.785 20:04:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:28.785 20:04:26 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:09:28.785 20:04:26 -- common/autotest_common.sh@10 -- # set +x 00:09:28.785 20:04:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:28.785 20:04:26 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:09:28.785 20:04:26 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 512,' ' "num_blocks": 3907016704,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 2048,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 512,' ' "num_blocks": 3907016703,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 3907018752,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:09:28.785 20:04:26 -- bdev/blockdev.sh@747 -- # jq -r .name 00:09:28.785 20:04:26 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:09:28.785 20:04:26 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:09:28.785 20:04:26 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:09:28.785 20:04:26 -- bdev/blockdev.sh@752 -- # killprocess 2073309 00:09:28.785 20:04:26 -- common/autotest_common.sh@926 -- # '[' -z 2073309 ']' 00:09:28.785 20:04:26 -- common/autotest_common.sh@930 -- # kill -0 2073309 00:09:28.785 20:04:26 -- common/autotest_common.sh@931 -- # uname 00:09:28.785 20:04:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:28.785 20:04:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2073309 00:09:28.785 20:04:26 -- common/autotest_common.sh@932 -- 
# process_name=reactor_0 00:09:28.785 20:04:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:28.785 20:04:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2073309' 00:09:28.785 killing process with pid 2073309 00:09:28.785 20:04:26 -- common/autotest_common.sh@945 -- # kill 2073309 00:09:28.785 20:04:26 -- common/autotest_common.sh@950 -- # wait 2073309 00:09:32.981 20:04:30 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:32.981 20:04:30 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:32.981 20:04:30 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:09:32.981 20:04:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.981 20:04:30 -- common/autotest_common.sh@10 -- # set +x 00:09:32.981 ************************************ 00:09:32.981 START TEST bdev_hello_world 00:09:32.981 ************************************ 00:09:32.981 20:04:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:32.981 [2024-04-25 20:04:30.662552] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:32.981 [2024-04-25 20:04:30.662612] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077452 ] 00:09:32.981 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.981 [2024-04-25 20:04:30.754227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.981 [2024-04-25 20:04:30.852272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.240 [2024-04-25 20:04:31.101881] 'OCF_Core' volume operations registered 00:09:33.240 [2024-04-25 20:04:31.105357] 'OCF_Cache' volume operations registered 00:09:33.240 [2024-04-25 20:04:31.109293] 'OCF Composite' volume operations registered 00:09:33.240 [2024-04-25 20:04:31.112788] 'SPDK_block_device' volume operations registered 00:09:36.529 [2024-04-25 20:04:33.971591] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:36.529 [2024-04-25 20:04:33.971627] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:36.529 [2024-04-25 20:04:33.971656] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:36.529 [2024-04-25 20:04:33.973932] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:36.529 [2024-04-25 20:04:33.974111] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:36.529 [2024-04-25 20:04:33.974130] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:36.529 [2024-04-25 20:04:33.975065] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
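The hello_world test above is the simplest consumer of the freshly created partitions: the hello_bdev example loads the same bdev.json, opens Nvme0n1p1, writes a buffer through an I/O channel, reads it back and prints the string, which is exactly what the NOTICE lines trace. Run by hand it is roughly the one-liner below (the previously started spdk_tgt must be stopped first, as the log does just above, because hello_bdev brings up its own SPDK application environment):

sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1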
00:09:36.529 00:09:36.529 [2024-04-25 20:04:33.975085] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:40.720 00:09:40.720 real 0m7.393s 00:09:40.720 user 0m6.272s 00:09:40.720 sys 0m0.376s 00:09:40.720 20:04:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:40.720 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:09:40.720 ************************************ 00:09:40.720 END TEST bdev_hello_world 00:09:40.720 ************************************ 00:09:40.720 20:04:38 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:09:40.720 20:04:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:40.720 20:04:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:40.720 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:09:40.720 ************************************ 00:09:40.720 START TEST bdev_bounds 00:09:40.720 ************************************ 00:09:40.720 20:04:38 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:09:40.720 20:04:38 -- bdev/blockdev.sh@288 -- # bdevio_pid=2078404 00:09:40.720 20:04:38 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:40.720 20:04:38 -- bdev/blockdev.sh@287 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:09:40.721 20:04:38 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 2078404' 00:09:40.721 Process bdevio pid: 2078404 00:09:40.721 20:04:38 -- bdev/blockdev.sh@291 -- # waitforlisten 2078404 00:09:40.721 20:04:38 -- common/autotest_common.sh@819 -- # '[' -z 2078404 ']' 00:09:40.721 20:04:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.721 20:04:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:40.721 20:04:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.721 20:04:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:40.721 20:04:38 -- common/autotest_common.sh@10 -- # set +x 00:09:40.721 [2024-04-25 20:04:38.109462] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
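bdev_bounds exercises I/O boundary conditions through the bdevio application: bdevio is started in wait mode (-w) with the same JSON configuration, and once its RPC socket is listening the CUnit suites are triggered with tests.py perform_tests, which is what produces the long passed/failed listing below. A compressed sketch of the two halves, assuming they are run from the SPDK tree in separate shells:

# shell 1: start bdevio and make it wait for an RPC trigger
sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json

# shell 2: run every registered test against the two GPT partitions
sudo ./test/bdev/bdevio/tests.py perform_tests

The "blockdev reset" cases in the listing disconnect and reset the NVMe controller while the suite is running, which is what the nvme_ctrlr_disconnect / "Resetting controller successful" notices correspond to.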
00:09:40.721 [2024-04-25 20:04:38.109543] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078404 ] 00:09:40.721 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.721 [2024-04-25 20:04:38.216799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:40.721 [2024-04-25 20:04:38.313664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.721 [2024-04-25 20:04:38.313739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.721 [2024-04-25 20:04:38.313744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.721 [2024-04-25 20:04:38.558146] 'OCF_Core' volume operations registered 00:09:40.721 [2024-04-25 20:04:38.561655] 'OCF_Cache' volume operations registered 00:09:40.721 [2024-04-25 20:04:38.565591] 'OCF Composite' volume operations registered 00:09:40.721 [2024-04-25 20:04:38.569099] 'SPDK_block_device' volume operations registered 00:09:44.912 20:04:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:44.912 20:04:41 -- common/autotest_common.sh@852 -- # return 0 00:09:44.912 20:04:41 -- bdev/blockdev.sh@292 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:44.912 I/O targets: 00:09:44.912 Nvme0n1p1: 3907016704 blocks of 512 bytes (1907723 MiB) 00:09:44.912 Nvme0n1p2: 3907016703 blocks of 512 bytes (1907723 MiB) 00:09:44.912 00:09:44.912 00:09:44.912 CUnit - A unit testing framework for C - Version 2.1-3 00:09:44.912 http://cunit.sourceforge.net/ 00:09:44.912 00:09:44.912 00:09:44.912 Suite: bdevio tests on: Nvme0n1p2 00:09:44.912 Test: blockdev write read block ...passed 00:09:44.912 Test: blockdev write zeroes read block ...passed 00:09:44.912 Test: blockdev write zeroes read no split ...passed 00:09:44.912 Test: blockdev write zeroes read split ...passed 00:09:44.912 Test: blockdev write zeroes read split partial ...passed 00:09:44.912 Test: blockdev reset ...[2024-04-25 20:04:42.133881] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:09:44.912 [2024-04-25 20:04:42.136282] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:44.912 passed 00:09:44.912 Test: blockdev write read 8 blocks ...passed 00:09:44.912 Test: blockdev write read size > 128k ...passed 00:09:44.912 Test: blockdev write read invalid size ...passed 00:09:44.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.912 Test: blockdev write read max offset ...passed 00:09:44.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.912 Test: blockdev writev readv 8 blocks ...passed 00:09:44.912 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.912 Test: blockdev writev readv block ...passed 00:09:44.912 Test: blockdev writev readv size > 128k ...passed 00:09:44.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.912 Test: blockdev comparev and writev ...passed 00:09:44.912 Test: blockdev nvme passthru rw ...passed 00:09:44.912 Test: blockdev nvme passthru vendor specific ...passed 00:09:44.912 Test: blockdev nvme admin passthru ...passed 00:09:44.912 Test: blockdev copy ...passed 00:09:44.912 Suite: bdevio tests on: Nvme0n1p1 00:09:44.912 Test: blockdev write read block ...passed 00:09:44.912 Test: blockdev write zeroes read block ...passed 00:09:44.912 Test: blockdev write zeroes read no split ...passed 00:09:44.912 Test: blockdev write zeroes read split ...passed 00:09:44.912 Test: blockdev write zeroes read split partial ...passed 00:09:44.912 Test: blockdev reset ...[2024-04-25 20:04:42.178522] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:09:44.912 [2024-04-25 20:04:42.180829] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:44.912 passed 00:09:44.912 Test: blockdev write read 8 blocks ...passed 00:09:44.912 Test: blockdev write read size > 128k ...passed 00:09:44.912 Test: blockdev write read invalid size ...passed 00:09:44.912 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:44.912 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:44.912 Test: blockdev write read max offset ...passed 00:09:44.912 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:44.912 Test: blockdev writev readv 8 blocks ...passed 00:09:44.912 Test: blockdev writev readv 30 x 1block ...passed 00:09:44.912 Test: blockdev writev readv block ...passed 00:09:44.912 Test: blockdev writev readv size > 128k ...passed 00:09:44.912 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:44.912 Test: blockdev comparev and writev ...passed 00:09:44.912 Test: blockdev nvme passthru rw ...passed 00:09:44.912 Test: blockdev nvme passthru vendor specific ...passed 00:09:44.912 Test: blockdev nvme admin passthru ...passed 00:09:44.912 Test: blockdev copy ...passed 00:09:44.912 00:09:44.912 Run Summary: Type Total Ran Passed Failed Inactive 00:09:44.912 suites 2 2 n/a 0 0 00:09:44.912 tests 46 46 46 0 0 00:09:44.912 asserts 260 260 260 0 n/a 00:09:44.912 00:09:44.912 Elapsed time = 0.235 seconds 00:09:44.912 0 00:09:44.912 20:04:42 -- bdev/blockdev.sh@293 -- # killprocess 2078404 00:09:44.912 20:04:42 -- common/autotest_common.sh@926 -- # '[' -z 2078404 ']' 00:09:44.912 20:04:42 -- common/autotest_common.sh@930 -- # kill -0 2078404 00:09:44.912 20:04:42 -- common/autotest_common.sh@931 -- # uname 00:09:44.912 20:04:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:44.912 20:04:42 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2078404 00:09:44.912 20:04:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:44.912 20:04:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:44.912 20:04:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2078404' 00:09:44.912 killing process with pid 2078404 00:09:44.912 20:04:42 -- common/autotest_common.sh@945 -- # kill 2078404 00:09:44.912 20:04:42 -- common/autotest_common.sh@950 -- # wait 2078404 00:09:49.104 20:04:46 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:49.104 00:09:49.104 real 0m8.288s 00:09:49.104 user 0m24.101s 00:09:49.104 sys 0m0.683s 00:09:49.104 20:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.104 20:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:49.104 ************************************ 00:09:49.104 END TEST bdev_bounds 00:09:49.104 ************************************ 00:09:49.104 20:04:46 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:09:49.104 20:04:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:49.104 20:04:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:49.104 20:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:49.104 ************************************ 00:09:49.104 START TEST bdev_nbd 00:09:49.104 ************************************ 00:09:49.104 20:04:46 -- common/autotest_common.sh@1104 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:09:49.104 20:04:46 -- bdev/blockdev.sh@298 -- # uname -s 00:09:49.104 20:04:46 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:49.104 20:04:46 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.104 20:04:46 -- bdev/blockdev.sh@301 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:09:49.104 20:04:46 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:09:49.104 20:04:46 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:49.104 20:04:46 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:09:49.104 20:04:46 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:49.104 20:04:46 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:49.104 20:04:46 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:49.104 20:04:46 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:09:49.104 20:04:46 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:49.104 20:04:46 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:49.104 20:04:46 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:09:49.104 20:04:46 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:49.104 20:04:46 -- bdev/blockdev.sh@316 -- # nbd_pid=2079616 00:09:49.104 20:04:46 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:49.104 20:04:46 -- bdev/blockdev.sh@318 -- # waitforlisten 2079616 /var/tmp/spdk-nbd.sock 00:09:49.104 20:04:46 -- bdev/blockdev.sh@315 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json '' 00:09:49.104 20:04:46 -- 
common/autotest_common.sh@819 -- # '[' -z 2079616 ']' 00:09:49.104 20:04:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:49.104 20:04:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:49.104 20:04:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:49.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:49.105 20:04:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:49.105 20:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:49.105 [2024-04-25 20:04:46.455706] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:09:49.105 [2024-04-25 20:04:46.455777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.105 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.105 [2024-04-25 20:04:46.562514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.105 [2024-04-25 20:04:46.662821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.105 [2024-04-25 20:04:46.900039] 'OCF_Core' volume operations registered 00:09:49.105 [2024-04-25 20:04:46.903535] 'OCF_Cache' volume operations registered 00:09:49.105 [2024-04-25 20:04:46.907558] 'OCF Composite' volume operations registered 00:09:49.105 [2024-04-25 20:04:46.911060] 'SPDK_block_device' volume operations registered 00:09:52.395 20:04:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:52.395 20:04:50 -- common/autotest_common.sh@852 -- # return 0 00:09:52.395 20:04:50 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@24 -- # local i 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:09:52.395 20:04:50 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:52.654 20:04:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:52.654 20:04:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:52.654 20:04:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:52.654 20:04:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:52.654 20:04:50 -- common/autotest_common.sh@857 -- # local i 00:09:52.654 20:04:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:52.654 20:04:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:52.654 20:04:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 
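The nbd test exports each GPT partition as a kernel block device: a bdev_svc application is started with its RPC socket at /var/tmp/spdk-nbd.sock, nbd_start_disk maps a bdev onto a /dev/nbdX node, and the waitfornbd helper (the grep traced just above) polls /proc/partitions until the node appears. A minimal attach/detach cycle, assuming the nbd kernel module is loaded:

RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
sudo $RPC nbd_start_disk Nvme0n1p1 /dev/nbd0   # map the bdev onto /dev/nbd0
grep -w nbd0 /proc/partitions                  # present once the kernel has attached it
sudo $RPC nbd_stop_disk /dev/nbd0              # tear the mapping down again
sudo $RPC nbd_get_disks                        # lists whatever is still exported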
00:09:52.654 20:04:50 -- common/autotest_common.sh@861 -- # break 00:09:52.654 20:04:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:52.654 20:04:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:52.654 20:04:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.654 1+0 records in 00:09:52.654 1+0 records out 00:09:52.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256095 s, 16.0 MB/s 00:09:52.654 20:04:50 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:52.913 20:04:50 -- common/autotest_common.sh@874 -- # size=4096 00:09:52.913 20:04:50 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:52.913 20:04:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:52.913 20:04:50 -- common/autotest_common.sh@877 -- # return 0 00:09:52.913 20:04:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:52.913 20:04:50 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:09:52.913 20:04:50 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:52.913 20:04:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:52.913 20:04:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:52.913 20:04:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:52.913 20:04:50 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:52.913 20:04:50 -- common/autotest_common.sh@857 -- # local i 00:09:52.913 20:04:50 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:52.913 20:04:50 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:52.913 20:04:50 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:52.913 20:04:50 -- common/autotest_common.sh@861 -- # break 00:09:52.913 20:04:50 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:52.913 20:04:50 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:52.913 20:04:50 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:53.173 1+0 records in 00:09:53.173 1+0 records out 00:09:53.173 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031887 s, 12.8 MB/s 00:09:53.173 20:04:50 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:53.173 20:04:50 -- common/autotest_common.sh@874 -- # size=4096 00:09:53.173 20:04:50 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:53.173 20:04:50 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:53.173 20:04:50 -- common/autotest_common.sh@877 -- # return 0 00:09:53.173 20:04:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:53.173 20:04:50 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:09:53.173 20:04:50 -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:53.433 { 00:09:53.433 "nbd_device": "/dev/nbd0", 00:09:53.433 "bdev_name": "Nvme0n1p1" 00:09:53.433 }, 00:09:53.433 { 00:09:53.433 "nbd_device": "/dev/nbd1", 00:09:53.433 "bdev_name": "Nvme0n1p2" 00:09:53.433 } 00:09:53.433 ]' 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo 
"${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:53.433 { 00:09:53.433 "nbd_device": "/dev/nbd0", 00:09:53.433 "bdev_name": "Nvme0n1p1" 00:09:53.433 }, 00:09:53.433 { 00:09:53.433 "nbd_device": "/dev/nbd1", 00:09:53.433 "bdev_name": "Nvme0n1p2" 00:09:53.433 } 00:09:53.433 ]' 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@51 -- # local i 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.433 20:04:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@41 -- # break 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.692 20:04:51 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@41 -- # break 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.952 20:04:51 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@65 -- # true 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@122 -- # count=0 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@127 -- # return 0 00:09:54.212 
20:04:51 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@12 -- # local i 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.212 20:04:51 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:54.471 /dev/nbd0 00:09:54.471 20:04:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:54.471 20:04:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:54.471 20:04:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:54.471 20:04:52 -- common/autotest_common.sh@857 -- # local i 00:09:54.471 20:04:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.471 20:04:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.471 20:04:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:54.471 20:04:52 -- common/autotest_common.sh@861 -- # break 00:09:54.471 20:04:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.471 20:04:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.471 20:04:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.471 1+0 records in 00:09:54.471 1+0 records out 00:09:54.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283578 s, 14.4 MB/s 00:09:54.471 20:04:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:54.471 20:04:52 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.471 20:04:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:54.471 20:04:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.471 20:04:52 -- common/autotest_common.sh@877 -- # return 0 00:09:54.471 20:04:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.471 20:04:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.471 20:04:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:54.731 /dev/nbd1 00:09:54.731 20:04:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:54.731 20:04:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:54.731 20:04:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:09:54.731 20:04:52 -- 
common/autotest_common.sh@857 -- # local i 00:09:54.731 20:04:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:54.731 20:04:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:54.731 20:04:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:09:54.731 20:04:52 -- common/autotest_common.sh@861 -- # break 00:09:54.731 20:04:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:54.731 20:04:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:54.731 20:04:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:54.731 1+0 records in 00:09:54.731 1+0 records out 00:09:54.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031668 s, 12.9 MB/s 00:09:54.731 20:04:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:54.731 20:04:52 -- common/autotest_common.sh@874 -- # size=4096 00:09:54.731 20:04:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest 00:09:54.731 20:04:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:54.732 20:04:52 -- common/autotest_common.sh@877 -- # return 0 00:09:54.732 20:04:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:54.732 20:04:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:54.732 20:04:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.732 20:04:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.732 20:04:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:54.991 { 00:09:54.991 "nbd_device": "/dev/nbd0", 00:09:54.991 "bdev_name": "Nvme0n1p1" 00:09:54.991 }, 00:09:54.991 { 00:09:54.991 "nbd_device": "/dev/nbd1", 00:09:54.991 "bdev_name": "Nvme0n1p2" 00:09:54.991 } 00:09:54.991 ]' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:54.991 { 00:09:54.991 "nbd_device": "/dev/nbd0", 00:09:54.991 "bdev_name": "Nvme0n1p1" 00:09:54.991 }, 00:09:54.991 { 00:09:54.991 "nbd_device": "/dev/nbd1", 00:09:54.991 "bdev_name": "Nvme0n1p2" 00:09:54.991 } 00:09:54.991 ]' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:54.991 /dev/nbd1' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:54.991 /dev/nbd1' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@65 -- # count=2 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@95 -- # count=2 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:54.991 256+0 records in 00:09:54.991 256+0 records out 00:09:54.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01031 s, 102 MB/s 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:54.991 256+0 records in 00:09:54.991 256+0 records out 00:09:54.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0445844 s, 23.5 MB/s 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:54.991 256+0 records in 00:09:54.991 256+0 records out 00:09:54.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0399322 s, 26.3 MB/s 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:54.991 20:04:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@51 -- # local i 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.252 20:04:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.252 20:04:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@41 -- # break 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@41 -- # break 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.521 20:04:53 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@65 -- # true 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@65 -- # count=0 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@104 -- # count=0 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@109 -- # return 0 00:09:55.785 20:04:53 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:55.785 20:04:53 -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:56.044 malloc_lvol_verify 00:09:56.044 20:04:53 -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:56.303 2e5b61cd-627e-4622-acbf-4e752f9be464 00:09:56.303 20:04:54 -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:56.303 6b0309b5-eedf-4837-8133-4da63cac3c3d 00:09:56.303 20:04:54 -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:56.562 /dev/nbd0 00:09:56.562 20:04:54 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:56.562 mke2fs 1.46.5 (30-Dec-2021) 00:09:56.562 Discarding device blocks: 0/4096 done 00:09:56.563 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:56.563 00:09:56.563 Allocating group tables: 0/1 done 00:09:56.563 Writing inode tables: 0/1 done 00:09:56.563 Creating journal (1024 blocks): done 00:09:56.563 Writing superblocks and filesystem accounting information: 0/1 done 
00:09:56.563 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@51 -- # local i 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:56.563 20:04:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@41 -- # break 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:56.822 20:04:54 -- bdev/nbd_common.sh@147 -- # return 0 00:09:56.822 20:04:54 -- bdev/blockdev.sh@324 -- # killprocess 2079616 00:09:56.822 20:04:54 -- common/autotest_common.sh@926 -- # '[' -z 2079616 ']' 00:09:56.822 20:04:54 -- common/autotest_common.sh@930 -- # kill -0 2079616 00:09:56.822 20:04:54 -- common/autotest_common.sh@931 -- # uname 00:09:56.822 20:04:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:56.822 20:04:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2079616 00:09:57.081 20:04:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:57.081 20:04:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:57.081 20:04:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2079616' 00:09:57.081 killing process with pid 2079616 00:09:57.081 20:04:54 -- common/autotest_common.sh@945 -- # kill 2079616 00:09:57.081 20:04:54 -- common/autotest_common.sh@950 -- # wait 2079616 00:10:01.274 20:04:58 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:10:01.274 00:10:01.274 real 0m12.408s 00:10:01.274 user 0m14.597s 00:10:01.274 sys 0m2.610s 00:10:01.274 20:04:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.274 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 ************************************ 00:10:01.274 END TEST bdev_nbd 00:10:01.274 ************************************ 00:10:01.274 20:04:58 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:10:01.274 20:04:58 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:10:01.274 20:04:58 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:10:01.274 20:04:58 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:01.274 skipping fio tests on NVMe due to multi-ns failures. 
00:10:01.274 20:04:58 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:01.274 20:04:58 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:01.274 20:04:58 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:01.274 20:04:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.274 20:04:58 -- common/autotest_common.sh@10 -- # set +x 00:10:01.274 ************************************ 00:10:01.274 START TEST bdev_verify 00:10:01.274 ************************************ 00:10:01.274 20:04:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:01.274 [2024-04-25 20:04:58.899094] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:01.274 [2024-04-25 20:04:58.899164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081410 ] 00:10:01.274 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.274 [2024-04-25 20:04:59.004686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:01.274 [2024-04-25 20:04:59.102293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.274 [2024-04-25 20:04:59.102298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.533 [2024-04-25 20:04:59.350254] 'OCF_Core' volume operations registered 00:10:01.533 [2024-04-25 20:04:59.353745] 'OCF_Cache' volume operations registered 00:10:01.533 [2024-04-25 20:04:59.357672] 'OCF Composite' volume operations registered 00:10:01.533 [2024-04-25 20:04:59.361170] 'SPDK_block_device' volume operations registered 00:10:04.821 Running I/O for 5 seconds... 
00:10:10.088 00:10:10.088 Latency(us) 00:10:10.088 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.088 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:10.088 Verification LBA range: start 0x0 length 0xe8e0580 00:10:10.088 Nvme0n1p1 : 5.02 7599.42 29.69 0.00 0.00 16798.91 2094.30 17324.30 00:10:10.088 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:10.088 Verification LBA range: start 0xe8e0580 length 0xe8e0580 00:10:10.088 Nvme0n1p1 : 5.02 7654.09 29.90 0.00 0.00 16676.27 3575.99 16982.37 00:10:10.088 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:10.088 Verification LBA range: start 0x0 length 0xe8e057f 00:10:10.088 Nvme0n1p2 : 5.03 7577.01 29.60 0.00 0.00 16824.47 2564.45 19375.86 00:10:10.088 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:10.088 Verification LBA range: start 0xe8e057f length 0xe8e057f 00:10:10.088 Nvme0n1p2 : 5.02 7647.65 29.87 0.00 0.00 16666.38 2820.90 17324.30 00:10:10.089 =================================================================================================================== 00:10:10.089 Total : 30478.17 119.06 0.00 0.00 16741.23 2094.30 19375.86 00:10:14.282 00:10:14.282 real 0m12.593s 00:10:14.282 user 0m23.586s 00:10:14.282 sys 0m0.413s 00:10:14.282 20:05:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.282 20:05:11 -- common/autotest_common.sh@10 -- # set +x 00:10:14.282 ************************************ 00:10:14.282 END TEST bdev_verify 00:10:14.282 ************************************ 00:10:14.282 20:05:11 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:14.282 20:05:11 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:10:14.282 20:05:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:14.282 20:05:11 -- common/autotest_common.sh@10 -- # set +x 00:10:14.282 ************************************ 00:10:14.282 START TEST bdev_verify_big_io 00:10:14.282 ************************************ 00:10:14.282 20:05:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:14.282 [2024-04-25 20:05:11.539054] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:14.282 [2024-04-25 20:05:11.539126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2083039 ] 00:10:14.282 EAL: No free 2048 kB hugepages reported on node 1 00:10:14.282 [2024-04-25 20:05:11.643004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:14.282 [2024-04-25 20:05:11.742741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.282 [2024-04-25 20:05:11.742747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.282 [2024-04-25 20:05:11.987767] 'OCF_Core' volume operations registered 00:10:14.282 [2024-04-25 20:05:11.990973] 'OCF_Cache' volume operations registered 00:10:14.282 [2024-04-25 20:05:11.994583] 'OCF Composite' volume operations registered 00:10:14.282 [2024-04-25 20:05:11.997798] 'SPDK_block_device' volume operations registered 00:10:17.571 Running I/O for 5 seconds... 00:10:22.846 00:10:22.846 Latency(us) 00:10:22.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.846 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:22.846 Verification LBA range: start 0x0 length 0xe8e058 00:10:22.846 Nvme0n1p1 : 5.16 736.12 46.01 0.00 0.00 171768.30 3732.70 183272.85 00:10:22.846 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:22.846 Verification LBA range: start 0xe8e058 length 0xe8e058 00:10:22.846 Nvme0n1p1 : 5.17 751.88 46.99 0.00 0.00 168220.05 4074.63 189655.49 00:10:22.846 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:22.846 Verification LBA range: start 0x0 length 0xe8e057 00:10:22.846 Nvme0n1p2 : 5.17 735.42 45.96 0.00 0.00 169087.82 3632.97 182361.04 00:10:22.846 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:22.846 Verification LBA range: start 0xe8e057 length 0xe8e057 00:10:22.846 Nvme0n1p2 : 5.17 751.60 46.97 0.00 0.00 165688.54 3048.85 190567.29 00:10:22.846 =================================================================================================================== 00:10:22.846 Total : 2975.02 185.94 0.00 0.00 168671.74 3048.85 190567.29 00:10:27.036 00:10:27.036 real 0m12.639s 00:10:27.036 user 0m23.737s 00:10:27.036 sys 0m0.351s 00:10:27.036 20:05:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.036 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:10:27.036 ************************************ 00:10:27.036 END TEST bdev_verify_big_io 00:10:27.036 ************************************ 00:10:27.036 20:05:24 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:27.036 20:05:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:27.036 20:05:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.036 20:05:24 -- common/autotest_common.sh@10 -- # set +x 00:10:27.036 ************************************ 00:10:27.036 START TEST bdev_write_zeroes 00:10:27.036 ************************************ 00:10:27.036 20:05:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 
00:10:27.036 [2024-04-25 20:05:24.216394] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:27.036 [2024-04-25 20:05:24.216463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2084823 ] 00:10:27.036 EAL: No free 2048 kB hugepages reported on node 1 00:10:27.036 [2024-04-25 20:05:24.321176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:27.036 [2024-04-25 20:05:24.414498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.037 [2024-04-25 20:05:24.661275] 'OCF_Core' volume operations registered 00:10:27.037 [2024-04-25 20:05:24.664757] 'OCF_Cache' volume operations registered 00:10:27.037 [2024-04-25 20:05:24.668695] 'OCF Composite' volume operations registered 00:10:27.037 [2024-04-25 20:05:24.672172] 'SPDK_block_device' volume operations registered 00:10:30.323 Running I/O for 1 seconds... 00:10:30.917 00:10:30.917 Latency(us) 00:10:30.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.917 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:30.917 Nvme0n1p1 : 1.01 23474.34 91.70 0.00 0.00 5439.42 3219.81 6354.14 00:10:30.917 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:30.917 Nvme0n1p2 : 1.01 23428.40 91.52 0.00 0.00 5440.31 3006.11 6325.65 00:10:30.917 =================================================================================================================== 00:10:30.917 Total : 46902.74 183.21 0.00 0.00 5439.86 3006.11 6354.14 00:10:35.134 00:10:35.134 real 0m8.397s 00:10:35.134 user 0m7.286s 00:10:35.134 sys 0m0.351s 00:10:35.134 20:05:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.134 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:10:35.134 ************************************ 00:10:35.134 END TEST bdev_write_zeroes 00:10:35.134 ************************************ 00:10:35.134 20:05:32 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:35.134 20:05:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:35.134 20:05:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.134 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:10:35.134 ************************************ 00:10:35.134 START TEST bdev_json_nonenclosed 00:10:35.134 ************************************ 00:10:35.134 20:05:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:35.134 [2024-04-25 20:05:32.667808] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:35.134 [2024-04-25 20:05:32.667880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085941 ] 00:10:35.134 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.134 [2024-04-25 20:05:32.774180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.134 [2024-04-25 20:05:32.870941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.134 [2024-04-25 20:05:32.871059] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:35.134 [2024-04-25 20:05:32.871081] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:35.134 00:10:35.134 real 0m0.366s 00:10:35.134 user 0m0.225s 00:10:35.134 sys 0m0.139s 00:10:35.134 20:05:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.134 20:05:32 -- common/autotest_common.sh@10 -- # set +x 00:10:35.134 ************************************ 00:10:35.134 END TEST bdev_json_nonenclosed 00:10:35.134 ************************************ 00:10:35.134 20:05:33 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:35.134 20:05:33 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:10:35.134 20:05:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.134 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:10:35.134 ************************************ 00:10:35.134 START TEST bdev_json_nonarray 00:10:35.134 ************************************ 00:10:35.134 20:05:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:35.400 [2024-04-25 20:05:33.081414] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:10:35.400 [2024-04-25 20:05:33.081490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085966 ] 00:10:35.400 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.400 [2024-04-25 20:05:33.187738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.400 [2024-04-25 20:05:33.290617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.400 [2024-04-25 20:05:33.290746] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:10:35.400 [2024-04-25 20:05:33.290769] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:35.657 00:10:35.657 real 0m0.373s 00:10:35.657 user 0m0.234s 00:10:35.658 sys 0m0.137s 00:10:35.658 20:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:35.658 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 ************************************ 00:10:35.658 END TEST bdev_json_nonarray 00:10:35.658 ************************************ 00:10:35.658 20:05:33 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:10:35.658 20:05:33 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:10:35.658 20:05:33 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:35.658 20:05:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:35.658 20:05:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:35.658 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 ************************************ 00:10:35.658 START TEST bdev_gpt_uuid 00:10:35.658 ************************************ 00:10:35.658 20:05:33 -- common/autotest_common.sh@1104 -- # bdev_gpt_uuid 00:10:35.658 20:05:33 -- bdev/blockdev.sh@612 -- # local bdev 00:10:35.658 20:05:33 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:10:35.658 20:05:33 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=2086117 00:10:35.658 20:05:33 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:35.658 20:05:33 -- bdev/blockdev.sh@47 -- # waitforlisten 2086117 00:10:35.658 20:05:33 -- common/autotest_common.sh@819 -- # '[' -z 2086117 ']' 00:10:35.658 20:05:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.658 20:05:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:35.658 20:05:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.658 20:05:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:35.658 20:05:33 -- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' '' 00:10:35.658 20:05:33 -- common/autotest_common.sh@10 -- # set +x 00:10:35.658 [2024-04-25 20:05:33.512485] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:35.658 [2024-04-25 20:05:33.512557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086117 ] 00:10:35.658 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.919 [2024-04-25 20:05:33.618070] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.919 [2024-04-25 20:05:33.718456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:35.919 [2024-04-25 20:05:33.718604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.179 [2024-04-25 20:05:33.909736] 'OCF_Core' volume operations registered 00:10:36.179 [2024-04-25 20:05:33.913297] 'OCF_Cache' volume operations registered 00:10:36.179 [2024-04-25 20:05:33.917139] 'OCF Composite' volume operations registered 00:10:36.179 [2024-04-25 20:05:33.920560] 'SPDK_block_device' volume operations registered 00:10:36.747 20:05:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:36.747 20:05:34 -- common/autotest_common.sh@852 -- # return 0 00:10:36.747 20:05:34 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:10:36.747 20:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:36.747 20:05:34 -- common/autotest_common.sh@10 -- # set +x 00:10:40.039 Some configs were skipped because the RPC state that can call them passed over. 00:10:40.039 20:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:10:40.039 20:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:40.039 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:10:40.039 20:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:40.039 20:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:40.039 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:10:40.039 20:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@619 -- # bdev='[ 00:10:40.039 { 00:10:40.039 "name": "Nvme0n1p1", 00:10:40.039 "aliases": [ 00:10:40.039 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:40.039 ], 00:10:40.039 "product_name": "GPT Disk", 00:10:40.039 "block_size": 512, 00:10:40.039 "num_blocks": 3907016704, 00:10:40.039 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:40.039 "assigned_rate_limits": { 00:10:40.039 "rw_ios_per_sec": 0, 00:10:40.039 "rw_mbytes_per_sec": 0, 00:10:40.039 "r_mbytes_per_sec": 0, 00:10:40.039 "w_mbytes_per_sec": 0 00:10:40.039 }, 00:10:40.039 "claimed": false, 00:10:40.039 "zoned": false, 00:10:40.039 "supported_io_types": { 00:10:40.039 "read": true, 00:10:40.039 "write": true, 00:10:40.039 "unmap": true, 00:10:40.039 "write_zeroes": true, 00:10:40.039 "flush": true, 00:10:40.039 "reset": true, 00:10:40.039 "compare": false, 00:10:40.039 "compare_and_write": false, 00:10:40.039 "abort": true, 00:10:40.039 "nvme_admin": false, 00:10:40.039 "nvme_io": false 00:10:40.039 }, 00:10:40.039 "driver_specific": { 00:10:40.039 "gpt": { 00:10:40.039 "base_bdev": "Nvme0n1", 00:10:40.039 "offset_blocks": 2048, 00:10:40.039 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:40.039 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:10:40.039 "partition_name": "SPDK_TEST_first" 00:10:40.039 } 00:10:40.039 } 00:10:40.039 } 00:10:40.039 ]' 00:10:40.039 20:05:37 -- bdev/blockdev.sh@620 -- # jq -r length 00:10:40.039 20:05:37 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:10:40.039 20:05:37 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:40.039 20:05:37 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:40.039 20:05:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:40.039 20:05:37 -- common/autotest_common.sh@10 -- # set +x 00:10:40.039 20:05:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@624 -- # bdev='[ 00:10:40.039 { 00:10:40.039 "name": "Nvme0n1p2", 00:10:40.039 "aliases": [ 00:10:40.039 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:40.039 ], 00:10:40.039 "product_name": "GPT Disk", 00:10:40.039 "block_size": 512, 00:10:40.039 "num_blocks": 3907016703, 00:10:40.039 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:40.039 "assigned_rate_limits": { 00:10:40.039 "rw_ios_per_sec": 0, 00:10:40.039 "rw_mbytes_per_sec": 0, 00:10:40.039 "r_mbytes_per_sec": 0, 00:10:40.039 "w_mbytes_per_sec": 0 00:10:40.039 }, 00:10:40.039 "claimed": false, 00:10:40.039 "zoned": false, 00:10:40.039 "supported_io_types": { 00:10:40.039 "read": true, 00:10:40.039 "write": true, 00:10:40.039 "unmap": true, 00:10:40.039 "write_zeroes": true, 00:10:40.039 "flush": true, 00:10:40.039 "reset": true, 00:10:40.039 "compare": false, 00:10:40.039 "compare_and_write": false, 00:10:40.039 "abort": true, 00:10:40.039 "nvme_admin": false, 00:10:40.039 "nvme_io": false 00:10:40.039 }, 00:10:40.039 "driver_specific": { 00:10:40.039 "gpt": { 00:10:40.039 "base_bdev": "Nvme0n1", 00:10:40.039 "offset_blocks": 3907018752, 00:10:40.039 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:40.039 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:40.039 "partition_name": "SPDK_TEST_second" 00:10:40.039 } 00:10:40.039 } 00:10:40.039 } 00:10:40.039 ]' 00:10:40.039 20:05:37 -- bdev/blockdev.sh@625 -- # jq -r length 00:10:40.039 20:05:37 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:10:40.039 20:05:37 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:40.039 20:05:37 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:40.039 20:05:37 -- bdev/blockdev.sh@629 -- # killprocess 2086117 00:10:40.040 20:05:37 -- common/autotest_common.sh@926 -- # '[' -z 2086117 ']' 00:10:40.040 20:05:37 -- common/autotest_common.sh@930 -- # kill -0 2086117 00:10:40.040 20:05:37 -- common/autotest_common.sh@931 -- # uname 00:10:40.040 20:05:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:40.040 
20:05:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2086117 00:10:40.040 20:05:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:40.040 20:05:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:40.040 20:05:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2086117' 00:10:40.040 killing process with pid 2086117 00:10:40.040 20:05:37 -- common/autotest_common.sh@945 -- # kill 2086117 00:10:40.040 20:05:37 -- common/autotest_common.sh@950 -- # wait 2086117 00:10:44.234 00:10:44.234 real 0m8.478s 00:10:44.234 user 0m7.903s 00:10:44.234 sys 0m0.602s 00:10:44.234 20:05:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.234 20:05:41 -- common/autotest_common.sh@10 -- # set +x 00:10:44.234 ************************************ 00:10:44.234 END TEST bdev_gpt_uuid 00:10:44.234 ************************************ 00:10:44.234 20:05:41 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:10:44.234 20:05:41 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:10:44.234 20:05:41 -- bdev/blockdev.sh@809 -- # cleanup 00:10:44.234 20:05:41 -- bdev/blockdev.sh@21 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile 00:10:44.234 20:05:41 -- bdev/blockdev.sh@22 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 00:10:44.234 20:05:41 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:10:44.234 20:05:41 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:10:44.234 20:05:41 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:10:44.234 20:05:41 -- bdev/blockdev.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:10:47.519 Waiting for block devices as requested 00:10:47.520 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:10:47.520 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:47.520 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:47.520 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:47.779 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:47.779 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:47.779 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:48.039 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:48.039 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:48.039 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:10:48.039 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:10:48.298 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:10:48.298 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:10:48.298 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:10:48.558 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:10:48.558 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:10:48.558 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:10:48.818 20:05:46 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:10:48.818 20:05:46 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:10:49.078 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:10:49.078 /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54 00:10:49.078 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:49.078 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:10:49.078 20:05:46 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:10:49.078 00:10:49.078 real 1m37.756s 00:10:49.078 user 2m12.346s 00:10:49.078 sys 0m13.990s 00:10:49.078 20:05:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.078 20:05:46 -- common/autotest_common.sh@10 -- # set +x 
00:10:49.078 ************************************ 00:10:49.078 END TEST blockdev_nvme_gpt 00:10:49.078 ************************************ 00:10:49.078 20:05:46 -- spdk/autotest.sh@222 -- # run_test nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh 00:10:49.078 20:05:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:10:49.078 20:05:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:49.078 20:05:46 -- common/autotest_common.sh@10 -- # set +x 00:10:49.078 ************************************ 00:10:49.078 START TEST nvme 00:10:49.078 ************************************ 00:10:49.078 20:05:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh 00:10:49.078 * Looking for test storage... 00:10:49.078 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:10:49.078 20:05:46 -- nvme/nvme.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:10:52.369 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:10:52.369 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:10:52.370 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:10:52.370 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:10:52.370 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:10:52.370 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:10:55.661 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:10:55.661 20:05:53 -- nvme/nvme.sh@79 -- # uname 00:10:55.661 20:05:53 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:55.661 20:05:53 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:55.661 20:05:53 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:55.661 20:05:53 -- common/autotest_common.sh@1058 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:55.661 20:05:53 -- common/autotest_common.sh@1044 -- # _randomize_va_space=2 00:10:55.661 20:05:53 -- common/autotest_common.sh@1045 -- # echo 0 00:10:55.661 20:05:53 -- common/autotest_common.sh@1047 -- # stubpid=2090038 00:10:55.661 20:05:53 -- common/autotest_common.sh@1048 -- # echo Waiting for stub to ready for secondary processes... 00:10:55.661 Waiting for stub to ready for secondary processes... 00:10:55.661 20:05:53 -- common/autotest_common.sh@1046 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:55.661 20:05:53 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:55.661 20:05:53 -- common/autotest_common.sh@1051 -- # [[ -e /proc/2090038 ]] 00:10:55.661 20:05:53 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:55.661 [2024-04-25 20:05:53.309885] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:10:55.661 [2024-04-25 20:05:53.309952] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.661 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.599 20:05:54 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:56.599 20:05:54 -- common/autotest_common.sh@1051 -- # [[ -e /proc/2090038 ]] 00:10:56.599 20:05:54 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:56.599 [2024-04-25 20:05:54.345463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:56.599 [2024-04-25 20:05:54.432875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.599 [2024-04-25 20:05:54.432976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.599 [2024-04-25 20:05:54.432977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.536 20:05:55 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:57.536 20:05:55 -- common/autotest_common.sh@1051 -- # [[ -e /proc/2090038 ]] 00:10:57.536 20:05:55 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:58.473 20:05:56 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:58.473 20:05:56 -- common/autotest_common.sh@1051 -- # [[ -e /proc/2090038 ]] 00:10:58.473 20:05:56 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:59.410 20:05:57 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:59.410 20:05:57 -- common/autotest_common.sh@1051 -- # [[ -e /proc/2090038 ]] 00:10:59.410 20:05:57 -- common/autotest_common.sh@1052 -- # sleep 1s 00:10:59.669 [2024-04-25 20:05:57.441341] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:59.669 [2024-04-25 20:05:57.450391] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:59.669 [2024-04-25 20:05:57.450513] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:00.693 20:05:58 -- common/autotest_common.sh@1049 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:00.693 20:05:58 -- common/autotest_common.sh@1054 -- # echo done. 00:11:00.693 done. 
00:11:00.693 20:05:58 -- nvme/nvme.sh@84 -- # run_test nvme_reset /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:00.693 20:05:58 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:11:00.693 20:05:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:00.693 20:05:58 -- common/autotest_common.sh@10 -- # set +x 00:11:00.693 ************************************ 00:11:00.693 START TEST nvme_reset 00:11:00.693 ************************************ 00:11:00.693 20:05:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:00.952 [2024-04-25 20:05:58.639836] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.639922] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.639944] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.639962] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.639979] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.639997] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640015] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640032] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640050] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640067] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640085] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640102] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640119] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640136] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640153] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640172] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640189] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640207] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640224] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640243] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640262] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 
20:05:58.640280] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640302] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640321] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640341] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640359] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640377] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640397] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640415] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640434] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640453] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640471] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640490] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640508] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640529] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640546] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640564] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640581] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640599] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640616] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640640] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640658] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640674] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640692] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640710] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640727] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640744] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.952 [2024-04-25 20:05:58.640761] 
nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:00.952 [2024-04-25 20:05:58] (the abort message above repeats once for every outstanding command during the first reset pass)
00:11:06.225 [2024-04-25 20:06:03] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command (repeated once for every outstanding command during the second reset pass)
00:11:11.499 [2024-04-25 20:06:08] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command (repeated once for every outstanding command during the third reset pass)
00:11:16.775 Initializing NVMe Controllers
00:11:16.775 Associating INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) with lcore 0
00:11:16.775 Initialization complete. Launching workers.
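Editor's note: the *ERROR* lines above are emitted by nvme_pcie_qpair_abort_trackers() when the reset test tears the queue pairs down while I/O is still in flight; each aborted tracker appears to surface later as an "IO completed with error" in the per-core summaries that follow. A minimal bash sketch of the consistency check implied by those summaries (the numbers are copied from this log; the variable names are illustrative and not part of the SPDK test scripts):

    # For each worker core, successful + error completions should equal both
    # "IO completed total" and "IO submitted".
    success=631872   # "IO completed successfully" reported for core 0 below
    errors=64        # "IO completed with error" (the aborted commands)
    total=$((success + errors))
    echo "core 0 total: ${total}"   # prints 631936, matching the summary below

The same arithmetic holds for the other two worker summaries (631488 + 64 = 631552).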
00:11:16.776 Starting thread on core 0 00:11:16.776 ======================================================== 00:11:16.776 631872 IO completed successfully 00:11:16.776 64 IO completed with error 00:11:16.776 -------------------------------------------------------- 00:11:16.776 631936 IO completed total 00:11:16.776 631936 IO submitted 00:11:16.776 Starting thread on core 0 00:11:16.776 ======================================================== 00:11:16.776 631488 IO completed successfully 00:11:16.776 64 IO completed with error 00:11:16.776 -------------------------------------------------------- 00:11:16.776 631552 IO completed total 00:11:16.776 631552 IO submitted 00:11:16.776 Starting thread on core 0 00:11:16.776 ======================================================== 00:11:16.776 631488 IO completed successfully 00:11:16.776 64 IO completed with error 00:11:16.776 -------------------------------------------------------- 00:11:16.776 631552 IO completed total 00:11:16.776 631552 IO submitted 00:11:16.776 00:11:16.776 real 0m15.392s 00:11:16.776 user 0m15.078s 00:11:16.776 sys 0m0.187s 00:11:16.776 20:06:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.776 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:11:16.776 ************************************ 00:11:16.776 END TEST nvme_reset 00:11:16.776 ************************************ 00:11:16.776 20:06:13 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:16.776 20:06:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:16.776 20:06:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.776 20:06:13 -- common/autotest_common.sh@10 -- # set +x 00:11:16.776 ************************************ 00:11:16.776 START TEST nvme_identify 00:11:16.776 ************************************ 00:11:16.776 20:06:13 -- common/autotest_common.sh@1104 -- # nvme_identify 00:11:16.776 20:06:13 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:16.776 20:06:13 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:16.776 20:06:13 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:16.776 20:06:13 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:16.776 20:06:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:16.776 20:06:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:16.776 20:06:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:16.776 20:06:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:11:16.776 20:06:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:16.776 20:06:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:16.776 20:06:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:11:16.776 20:06:13 -- nvme/nvme.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -i 0 00:11:16.776 ===================================================== 00:11:16.776 NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:16.776 ===================================================== 00:11:16.776 Controller Capabilities/Features 00:11:16.776 ================================ 00:11:16.776 Vendor ID: 8086 00:11:16.776 Subsystem Vendor ID: 8086 00:11:16.776 Serial Number: BTLJ83030AK84P0DGN 00:11:16.776 Model Number: INTEL SSDPE2KX040T8 00:11:16.776 Firmware Version: VDV10184 00:11:16.776 Recommended Arb Burst: 0 00:11:16.776 IEEE OUI Identifier: e4 d2 5c 00:11:16.776 Multi-path I/O 00:11:16.776 May have multiple 
subsystem ports: No 00:11:16.776 May have multiple controllers: No 00:11:16.776 Associated with SR-IOV VF: No 00:11:16.776 Max Data Transfer Size: 131072 00:11:16.776 Max Number of Namespaces: 128 00:11:16.776 Max Number of I/O Queues: 128 00:11:16.776 NVMe Specification Version (VS): 1.2 00:11:16.776 NVMe Specification Version (Identify): 1.2 00:11:16.776 Maximum Queue Entries: 4096 00:11:16.776 Contiguous Queues Required: Yes 00:11:16.776 Arbitration Mechanisms Supported 00:11:16.776 Weighted Round Robin: Supported 00:11:16.776 Vendor Specific: Not Supported 00:11:16.776 Reset Timeout: 60000 ms 00:11:16.776 Doorbell Stride: 4 bytes 00:11:16.776 NVM Subsystem Reset: Not Supported 00:11:16.776 Command Sets Supported 00:11:16.776 NVM Command Set: Supported 00:11:16.776 Boot Partition: Not Supported 00:11:16.776 Memory Page Size Minimum: 4096 bytes 00:11:16.776 Memory Page Size Maximum: 4096 bytes 00:11:16.776 Persistent Memory Region: Not Supported 00:11:16.776 Optional Asynchronous Events Supported 00:11:16.776 Namespace Attribute Notices: Not Supported 00:11:16.776 Firmware Activation Notices: Supported 00:11:16.776 ANA Change Notices: Not Supported 00:11:16.776 PLE Aggregate Log Change Notices: Not Supported 00:11:16.776 LBA Status Info Alert Notices: Not Supported 00:11:16.776 EGE Aggregate Log Change Notices: Not Supported 00:11:16.776 Normal NVM Subsystem Shutdown event: Not Supported 00:11:16.776 Zone Descriptor Change Notices: Not Supported 00:11:16.776 Discovery Log Change Notices: Not Supported 00:11:16.776 Controller Attributes 00:11:16.776 128-bit Host Identifier: Not Supported 00:11:16.776 Non-Operational Permissive Mode: Not Supported 00:11:16.776 NVM Sets: Not Supported 00:11:16.776 Read Recovery Levels: Not Supported 00:11:16.776 Endurance Groups: Not Supported 00:11:16.776 Predictable Latency Mode: Not Supported 00:11:16.776 Traffic Based Keep ALive: Not Supported 00:11:16.776 Namespace Granularity: Not Supported 00:11:16.776 SQ Associations: Not Supported 00:11:16.776 UUID List: Not Supported 00:11:16.776 Multi-Domain Subsystem: Not Supported 00:11:16.776 Fixed Capacity Management: Not Supported 00:11:16.776 Variable Capacity Management: Not Supported 00:11:16.776 Delete Endurance Group: Not Supported 00:11:16.776 Delete NVM Set: Not Supported 00:11:16.776 Extended LBA Formats Supported: Not Supported 00:11:16.776 Flexible Data Placement Supported: Not Supported 00:11:16.776 00:11:16.776 Controller Memory Buffer Support 00:11:16.776 ================================ 00:11:16.776 Supported: No 00:11:16.776 00:11:16.776 Persistent Memory Region Support 00:11:16.776 ================================ 00:11:16.776 Supported: No 00:11:16.776 00:11:16.776 Admin Command Set Attributes 00:11:16.776 ============================ 00:11:16.776 Security Send/Receive: Not Supported 00:11:16.776 Format NVM: Supported 00:11:16.776 Firmware Activate/Download: Supported 00:11:16.776 Namespace Management: Supported 00:11:16.776 Device Self-Test: Not Supported 00:11:16.776 Directives: Not Supported 00:11:16.776 NVMe-MI: Not Supported 00:11:16.776 Virtualization Management: Not Supported 00:11:16.776 Doorbell Buffer Config: Not Supported 00:11:16.776 Get LBA Status Capability: Not Supported 00:11:16.776 Command & Feature Lockdown Capability: Not Supported 00:11:16.776 Abort Command Limit: 4 00:11:16.776 Async Event Request Limit: 4 00:11:16.776 Number of Firmware Slots: 4 00:11:16.776 Firmware Slot 1 Read-Only: No 00:11:16.776 Firmware Activation Without Reset: Yes 00:11:16.776 Multiple Update 
Detection Support: No 00:11:16.776 Firmware Update Granularity: No Information Provided 00:11:16.776 Per-Namespace SMART Log: No 00:11:16.776 Asymmetric Namespace Access Log Page: Not Supported 00:11:16.776 Subsystem NQN: 00:11:16.776 Command Effects Log Page: Supported 00:11:16.776 Get Log Page Extended Data: Supported 00:11:16.776 Telemetry Log Pages: Supported 00:11:16.776 Persistent Event Log Pages: Not Supported 00:11:16.776 Supported Log Pages Log Page: May Support 00:11:16.776 Commands Supported & Effects Log Page: Not Supported 00:11:16.776 Feature Identifiers & Effects Log Page:May Support 00:11:16.776 NVMe-MI Commands & Effects Log Page: May Support 00:11:16.776 Data Area 4 for Telemetry Log: Not Supported 00:11:16.776 Error Log Page Entries Supported: 64 00:11:16.776 Keep Alive: Not Supported 00:11:16.776 00:11:16.776 NVM Command Set Attributes 00:11:16.776 ========================== 00:11:16.776 Submission Queue Entry Size 00:11:16.776 Max: 64 00:11:16.776 Min: 64 00:11:16.776 Completion Queue Entry Size 00:11:16.776 Max: 16 00:11:16.776 Min: 16 00:11:16.776 Number of Namespaces: 128 00:11:16.776 Compare Command: Not Supported 00:11:16.776 Write Uncorrectable Command: Supported 00:11:16.776 Dataset Management Command: Supported 00:11:16.776 Write Zeroes Command: Not Supported 00:11:16.776 Set Features Save Field: Not Supported 00:11:16.776 Reservations: Not Supported 00:11:16.776 Timestamp: Not Supported 00:11:16.776 Copy: Not Supported 00:11:16.776 Volatile Write Cache: Not Present 00:11:16.776 Atomic Write Unit (Normal): 1 00:11:16.776 Atomic Write Unit (PFail): 1 00:11:16.776 Atomic Compare & Write Unit: 1 00:11:16.777 Fused Compare & Write: Not Supported 00:11:16.777 Scatter-Gather List 00:11:16.777 SGL Command Set: Not Supported 00:11:16.777 SGL Keyed: Not Supported 00:11:16.777 SGL Bit Bucket Descriptor: Not Supported 00:11:16.777 SGL Metadata Pointer: Not Supported 00:11:16.777 Oversized SGL: Not Supported 00:11:16.777 SGL Metadata Address: Not Supported 00:11:16.777 SGL Offset: Not Supported 00:11:16.777 Transport SGL Data Block: Not Supported 00:11:16.777 Replay Protected Memory Block: Not Supported 00:11:16.777 00:11:16.777 Firmware Slot Information 00:11:16.777 ========================= 00:11:16.777 Active slot: 1 00:11:16.777 Slot 1 Firmware Revision: VDV10184 00:11:16.777 00:11:16.777 00:11:16.777 Commands Supported and Effects 00:11:16.777 ============================== 00:11:16.777 Admin Commands 00:11:16.777 -------------- 00:11:16.777 Delete I/O Submission Queue (00h): Supported 00:11:16.777 Create I/O Submission Queue (01h): Supported All-NS-Exclusive 00:11:16.777 Get Log Page (02h): Supported 00:11:16.777 Delete I/O Completion Queue (04h): Supported 00:11:16.777 Create I/O Completion Queue (05h): Supported All-NS-Exclusive 00:11:16.777 Identify (06h): Supported 00:11:16.777 Abort (08h): Supported 00:11:16.777 Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 00:11:16.777 Get Features (0Ah): Supported 00:11:16.777 Asynchronous Event Request (0Ch): Supported 00:11:16.777 Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive 00:11:16.777 Firmware Commit (10h): Supported Ctrlr-Cap-Change 00:11:16.777 Firmware Image Download (11h): Supported 00:11:16.777 Namespace Attachment (15h): Supported Per-NS-Exclusive 00:11:16.777 Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive 00:11:16.777 Vendor specific (C8h): Supported 00:11:16.777 Vendor specific (D2h): 
Supported 00:11:16.777 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:11:16.777 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:11:16.777 I/O Commands 00:11:16.777 ------------ 00:11:16.777 Flush (00h): Supported LBA-Change 00:11:16.777 Write (01h): Supported LBA-Change 00:11:16.777 Read (02h): Supported 00:11:16.777 Write Uncorrectable (04h): Supported LBA-Change 00:11:16.777 Dataset Management (09h): Supported LBA-Change 00:11:16.777 00:11:16.777 Error Log 00:11:16.777 ========= 00:11:16.777 Entry: 0 00:11:16.777 Error Count: 0x4c8b 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 1 00:11:16.777 Error Count: 0x4c8a 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 2 00:11:16.777 Error Count: 0x4c89 00:11:16.777 Submission Queue Id: 0x0 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 3 00:11:16.777 Error Count: 0x4c88 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 4 00:11:16.777 Error Count: 0x4c87 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 5 00:11:16.777 Error Count: 0x4c86 00:11:16.777 Submission Queue Id: 0x0 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 6 00:11:16.777 Error Count: 0x4c85 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 7 00:11:16.777 Error Count: 0x4c84 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 
00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 8 00:11:16.777 Error Count: 0x4c83 00:11:16.777 Submission Queue Id: 0x0 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 9 00:11:16.777 Error Count: 0x4c82 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 10 00:11:16.777 Error Count: 0x4c81 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 11 00:11:16.777 Error Count: 0x4c80 00:11:16.777 Submission Queue Id: 0x0 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 12 00:11:16.777 Error Count: 0x4c7f 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.777 Namespace: 0xffffffff 00:11:16.777 Vendor Log Page: 0x0 00:11:16.777 ----------- 00:11:16.777 Entry: 13 00:11:16.777 Error Count: 0x4c7e 00:11:16.777 Submission Queue Id: 0x2 00:11:16.777 Command Id: 0xffff 00:11:16.777 Phase Bit: 0 00:11:16.777 Status Code: 0x6 00:11:16.777 Status Code Type: 0x0 00:11:16.777 Do Not Retry: 1 00:11:16.777 Error Location: 0xffff 00:11:16.777 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 14 00:11:16.778 Error Count: 0x4c7d 00:11:16.778 Submission Queue Id: 0x0 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 15 00:11:16.778 Error Count: 0x4c7c 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 16 00:11:16.778 Error Count: 0x4c7b 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 
00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 17 00:11:16.778 Error Count: 0x4c7a 00:11:16.778 Submission Queue Id: 0x0 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 18 00:11:16.778 Error Count: 0x4c79 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 19 00:11:16.778 Error Count: 0x4c78 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 20 00:11:16.778 Error Count: 0x4c77 00:11:16.778 Submission Queue Id: 0x0 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 21 00:11:16.778 Error Count: 0x4c76 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 22 00:11:16.778 Error Count: 0x4c75 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 23 00:11:16.778 Error Count: 0x4c74 00:11:16.778 Submission Queue Id: 0x0 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 24 00:11:16.778 Error Count: 0x4c73 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 25 00:11:16.778 Error Count: 0x4c72 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 
0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 26 00:11:16.778 Error Count: 0x4c71 00:11:16.778 Submission Queue Id: 0x0 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 27 00:11:16.778 Error Count: 0x4c70 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 28 00:11:16.778 Error Count: 0x4c6f 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 29 00:11:16.778 Error Count: 0x4c6e 00:11:16.778 Submission Queue Id: 0x0 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 30 00:11:16.778 Error Count: 0x4c6d 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 31 00:11:16.778 Error Count: 0x4c6c 00:11:16.778 Submission Queue Id: 0x2 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.778 Vendor Log Page: 0x0 00:11:16.778 ----------- 00:11:16.778 Entry: 32 00:11:16.778 Error Count: 0x4c6b 00:11:16.778 Submission Queue Id: 0x0 00:11:16.778 Command Id: 0xffff 00:11:16.778 Phase Bit: 0 00:11:16.778 Status Code: 0x6 00:11:16.778 Status Code Type: 0x0 00:11:16.778 Do Not Retry: 1 00:11:16.778 Error Location: 0xffff 00:11:16.778 LBA: 0x0 00:11:16.778 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 33 00:11:16.779 Error Count: 0x4c6a 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 34 00:11:16.779 Error Count: 0x4c69 00:11:16.779 Submission Queue Id: 
0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 35 00:11:16.779 Error Count: 0x4c68 00:11:16.779 Submission Queue Id: 0x0 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 36 00:11:16.779 Error Count: 0x4c67 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 37 00:11:16.779 Error Count: 0x4c66 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 38 00:11:16.779 Error Count: 0x4c65 00:11:16.779 Submission Queue Id: 0x0 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 39 00:11:16.779 Error Count: 0x4c64 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 40 00:11:16.779 Error Count: 0x4c63 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 41 00:11:16.779 Error Count: 0x4c62 00:11:16.779 Submission Queue Id: 0x0 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 42 00:11:16.779 Error Count: 0x4c61 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 43 00:11:16.779 Error Count: 0x4c60 
00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 44 00:11:16.779 Error Count: 0x4c5f 00:11:16.779 Submission Queue Id: 0x0 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 45 00:11:16.779 Error Count: 0x4c5e 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 46 00:11:16.779 Error Count: 0x4c5d 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 47 00:11:16.779 Error Count: 0x4c5c 00:11:16.779 Submission Queue Id: 0x0 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 48 00:11:16.779 Error Count: 0x4c5b 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 49 00:11:16.779 Error Count: 0x4c5a 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 50 00:11:16.779 Error Count: 0x4c59 00:11:16.779 Submission Queue Id: 0x0 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 51 00:11:16.779 Error Count: 0x4c58 00:11:16.779 Submission Queue Id: 0x2 00:11:16.779 Command Id: 0xffff 00:11:16.779 Phase Bit: 0 00:11:16.779 Status Code: 0x6 00:11:16.779 Status Code Type: 0x0 00:11:16.779 Do Not Retry: 1 00:11:16.779 Error Location: 0xffff 00:11:16.779 LBA: 0x0 00:11:16.779 Namespace: 0xffffffff 00:11:16.779 Vendor Log Page: 0x0 00:11:16.779 ----------- 00:11:16.779 Entry: 52 
00:11:16.779 Error Count: 0x4c57 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 53 00:11:16.780 Error Count: 0x4c56 00:11:16.780 Submission Queue Id: 0x0 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 54 00:11:16.780 Error Count: 0x4c55 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 55 00:11:16.780 Error Count: 0x4c54 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 56 00:11:16.780 Error Count: 0x4c53 00:11:16.780 Submission Queue Id: 0x0 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 57 00:11:16.780 Error Count: 0x4c52 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 58 00:11:16.780 Error Count: 0x4c51 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 59 00:11:16.780 Error Count: 0x4c50 00:11:16.780 Submission Queue Id: 0x0 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 60 00:11:16.780 Error Count: 0x4c4f 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 
----------- 00:11:16.780 Entry: 61 00:11:16.780 Error Count: 0x4c4e 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 62 00:11:16.780 Error Count: 0x4c4d 00:11:16.780 Submission Queue Id: 0x0 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 ----------- 00:11:16.780 Entry: 63 00:11:16.780 Error Count: 0x4c4c 00:11:16.780 Submission Queue Id: 0x2 00:11:16.780 Command Id: 0xffff 00:11:16.780 Phase Bit: 0 00:11:16.780 Status Code: 0x6 00:11:16.780 Status Code Type: 0x0 00:11:16.780 Do Not Retry: 1 00:11:16.780 Error Location: 0xffff 00:11:16.780 LBA: 0x0 00:11:16.780 Namespace: 0xffffffff 00:11:16.780 Vendor Log Page: 0x0 00:11:16.780 00:11:16.780 Arbitration 00:11:16.780 =========== 00:11:16.780 Arbitration Burst: 1 00:11:16.780 Low Priority Weight: 1 00:11:16.780 Medium Priority Weight: 1 00:11:16.780 High Priority Weight: 1 00:11:16.780 00:11:16.780 Power Management 00:11:16.780 ================ 00:11:16.780 Number of Power States: 1 00:11:16.780 Current Power State: Power State #0 00:11:16.780 Power State #0: 00:11:16.780 Max Power: 20.00 W 00:11:16.780 Non-Operational State: Operational 00:11:16.780 Entry Latency: Not Reported 00:11:16.780 Exit Latency: Not Reported 00:11:16.780 Relative Read Throughput: 0 00:11:16.780 Relative Read Latency: 0 00:11:16.780 Relative Write Throughput: 0 00:11:16.780 Relative Write Latency: 0 00:11:16.780 Idle Power: Not Reported 00:11:16.780 Active Power: Not Reported 00:11:16.780 Non-Operational Permissive Mode: Not Supported 00:11:16.780 00:11:16.780 Health Information 00:11:16.780 ================== 00:11:16.780 Critical Warnings: 00:11:16.780 Available Spare Space: OK 00:11:16.780 Temperature: OK 00:11:16.780 Device Reliability: OK 00:11:16.780 Read Only: No 00:11:16.780 Volatile Memory Backup: OK 00:11:16.780 Current Temperature: 310 Kelvin (37 Celsius) 00:11:16.780 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:16.780 Available Spare: 99% 00:11:16.780 Available Spare Threshold: 10% 00:11:16.780 Life Percentage Used: 17% 00:11:16.780 Data Units Read: 371083395 00:11:16.780 Data Units Written: 510491648 00:11:16.780 Host Read Commands: 22082840448 00:11:16.780 Host Write Commands: 25063102597 00:11:16.780 Controller Busy Time: 2526 minutes 00:11:16.780 Power Cycles: 28 00:11:16.780 Power On Hours: 15505 hours 00:11:16.780 Unsafe Shutdowns: 45 00:11:16.780 Unrecoverable Media Errors: 0 00:11:16.780 Lifetime Error Log Entries: 19595 00:11:16.780 Warning Temperature Time: 1188 minutes 00:11:16.780 Critical Temperature Time: 0 minutes 00:11:16.780 00:11:16.780 Number of Queues 00:11:16.780 ================ 00:11:16.780 Number of I/O Submission Queues: 128 00:11:16.780 Number of I/O Completion Queues: 128 00:11:16.780 00:11:16.780 Intel Health Information 00:11:16.780 ================== 00:11:16.780 Program Fail Count: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 5 00:11:16.780 Erase Fail Count: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 1 00:11:16.780 Wear 
Leveling Count: 00:11:16.780 Normalized Value : 83 00:11:16.780 Current Raw Value: 00:11:16.780 Min: 107 00:11:16.780 Max: 893 00:11:16.780 Avg: 831 00:11:16.780 End to End Error Detection Count: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 0 00:11:16.780 CRC Error Count: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 0 00:11:16.780 Timed Workload, Media Wear: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 65535 00:11:16.780 Timed Workload, Host Read/Write Ratio: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 65535% 00:11:16.780 Timed Workload, Timer: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 65535 00:11:16.780 Thermal Throttle Status: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 00:11:16.780 Percentage: 0% 00:11:16.780 Throttling Event Count: 1 00:11:16.780 Retry Buffer Overflow Counter: 00:11:16.780 Normalized Value : 100 00:11:16.780 Current Raw Value: 0 00:11:16.780 PLL Lock Loss Count: 00:11:16.781 Normalized Value : 100 00:11:16.781 Current Raw Value: 0 00:11:16.781 NAND Bytes Written: 00:11:16.781 Normalized Value : 100 00:11:16.781 Current Raw Value: 57507793 00:11:16.781 Host Bytes Written: 00:11:16.781 Normalized Value : 100 00:11:16.781 Current Raw Value: 7789484 00:11:16.781 00:11:16.781 Intel Temperature Information 00:11:16.781 ================== 00:11:16.781 Current Temperature: 37 00:11:16.781 Overtemp shutdown Flag for last critical component temperature: 0 00:11:16.781 Overtemp shutdown Flag for life critical component temperature: 0 00:11:16.781 Highest temperature: 73 00:11:16.781 Lowest temperature: 21 00:11:16.781 Specified Maximum Operating Temperature: 70 00:11:16.781 Specified Minimum Operating Temperature: 0 00:11:16.781 Estimated offset: 0 00:11:16.781 00:11:16.781 00:11:16.781 Intel Marketing Information 00:11:16.781 ================== 00:11:16.781 Marketing Product Information: Intel(R) SSD DC P4510 Series 00:11:16.781 00:11:16.781 00:11:16.781 Active Namespaces 00:11:16.781 ================= 00:11:16.781 Namespace ID:1 00:11:16.781 Error Recovery Timeout: Unlimited 00:11:16.781 Command Set Identifier: NVM (00h) 00:11:16.781 Deallocate: Supported 00:11:16.781 Deallocated/Unwritten Error: Not Supported 00:11:16.781 Deallocated Read Value: All 0x00 00:11:16.781 Deallocate in Write Zeroes: Not Supported 00:11:16.781 Deallocated Guard Field: 0xFFFF 00:11:16.781 Flush: Not Supported 00:11:16.781 Reservation: Not Supported 00:11:16.781 Namespace Sharing Capabilities: Private 00:11:16.781 Size (in LBAs): 7814037168 (3726GiB) 00:11:16.781 Capacity (in LBAs): 7814037168 (3726GiB) 00:11:16.781 Utilization (in LBAs): 7814037168 (3726GiB) 00:11:16.781 NGUID: 010000000F3D00000000000000000000 00:11:16.781 EUI64: 0000000000000F3D 00:11:16.781 Thin Provisioning: Not Supported 00:11:16.781 Per-NS Atomic Units: No 00:11:16.781 NGUID/EUI64 Never Reused: No 00:11:16.781 Namespace Write Protected: No 00:11:16.781 Number of LBA Formats: 2 00:11:16.781 Current LBA Format: LBA Format #00 00:11:16.781 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.781 LBA Format #01: Data Size: 4096 Metadata Size: 0 00:11:16.781 00:11:16.781 20:06:14 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:16.781 20:06:14 -- nvme/nvme.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:11:16.781 ===================================================== 00:11:16.781 NVMe Controller at 
0000:5e:00.0 [8086:0a54] 00:11:16.781 ===================================================== 00:11:16.781 Controller Capabilities/Features 00:11:16.781 ================================ 00:11:16.781 Vendor ID: 8086 00:11:16.781 Subsystem Vendor ID: 8086 00:11:16.781 Serial Number: BTLJ83030AK84P0DGN 00:11:16.781 Model Number: INTEL SSDPE2KX040T8 00:11:16.781 Firmware Version: VDV10184 00:11:16.781 Recommended Arb Burst: 0 00:11:16.781 IEEE OUI Identifier: e4 d2 5c 00:11:16.781 Multi-path I/O 00:11:16.781 May have multiple subsystem ports: No 00:11:16.781 May have multiple controllers: No 00:11:16.781 Associated with SR-IOV VF: No 00:11:16.781 Max Data Transfer Size: 131072 00:11:16.781 Max Number of Namespaces: 128 00:11:16.781 Max Number of I/O Queues: 128 00:11:16.781 NVMe Specification Version (VS): 1.2 00:11:16.781 NVMe Specification Version (Identify): 1.2 00:11:16.781 Maximum Queue Entries: 4096 00:11:16.781 Contiguous Queues Required: Yes 00:11:16.781 Arbitration Mechanisms Supported 00:11:16.781 Weighted Round Robin: Supported 00:11:16.781 Vendor Specific: Not Supported 00:11:16.781 Reset Timeout: 60000 ms 00:11:16.781 Doorbell Stride: 4 bytes 00:11:16.781 NVM Subsystem Reset: Not Supported 00:11:16.781 Command Sets Supported 00:11:16.781 NVM Command Set: Supported 00:11:16.781 Boot Partition: Not Supported 00:11:16.781 Memory Page Size Minimum: 4096 bytes 00:11:16.781 Memory Page Size Maximum: 4096 bytes 00:11:16.781 Persistent Memory Region: Not Supported 00:11:16.781 Optional Asynchronous Events Supported 00:11:16.781 Namespace Attribute Notices: Not Supported 00:11:16.781 Firmware Activation Notices: Supported 00:11:16.781 ANA Change Notices: Not Supported 00:11:16.781 PLE Aggregate Log Change Notices: Not Supported 00:11:16.781 LBA Status Info Alert Notices: Not Supported 00:11:16.781 EGE Aggregate Log Change Notices: Not Supported 00:11:16.781 Normal NVM Subsystem Shutdown event: Not Supported 00:11:16.781 Zone Descriptor Change Notices: Not Supported 00:11:16.781 Discovery Log Change Notices: Not Supported 00:11:16.781 Controller Attributes 00:11:16.781 128-bit Host Identifier: Not Supported 00:11:16.781 Non-Operational Permissive Mode: Not Supported 00:11:16.781 NVM Sets: Not Supported 00:11:16.781 Read Recovery Levels: Not Supported 00:11:16.781 Endurance Groups: Not Supported 00:11:16.781 Predictable Latency Mode: Not Supported 00:11:16.781 Traffic Based Keep ALive: Not Supported 00:11:16.781 Namespace Granularity: Not Supported 00:11:16.781 SQ Associations: Not Supported 00:11:16.781 UUID List: Not Supported 00:11:16.781 Multi-Domain Subsystem: Not Supported 00:11:16.781 Fixed Capacity Management: Not Supported 00:11:16.781 Variable Capacity Management: Not Supported 00:11:16.781 Delete Endurance Group: Not Supported 00:11:16.781 Delete NVM Set: Not Supported 00:11:16.781 Extended LBA Formats Supported: Not Supported 00:11:16.781 Flexible Data Placement Supported: Not Supported 00:11:16.781 00:11:16.781 Controller Memory Buffer Support 00:11:16.781 ================================ 00:11:16.781 Supported: No 00:11:16.781 00:11:16.781 Persistent Memory Region Support 00:11:16.781 ================================ 00:11:16.781 Supported: No 00:11:16.781 00:11:16.781 Admin Command Set Attributes 00:11:16.781 ============================ 00:11:16.781 Security Send/Receive: Not Supported 00:11:16.781 Format NVM: Supported 00:11:16.781 Firmware Activate/Download: Supported 00:11:16.781 Namespace Management: Supported 00:11:16.781 Device Self-Test: Not Supported 00:11:16.781 
Directives: Not Supported 00:11:16.781 NVMe-MI: Not Supported 00:11:16.781 Virtualization Management: Not Supported 00:11:16.781 Doorbell Buffer Config: Not Supported 00:11:16.781 Get LBA Status Capability: Not Supported 00:11:16.781 Command & Feature Lockdown Capability: Not Supported 00:11:16.781 Abort Command Limit: 4 00:11:16.781 Async Event Request Limit: 4 00:11:16.781 Number of Firmware Slots: 4 00:11:16.781 Firmware Slot 1 Read-Only: No 00:11:16.781 Firmware Activation Without Reset: Yes 00:11:16.781 Multiple Update Detection Support: No 00:11:16.781 Firmware Update Granularity: No Information Provided 00:11:16.781 Per-Namespace SMART Log: No 00:11:16.781 Asymmetric Namespace Access Log Page: Not Supported 00:11:16.781 Subsystem NQN: 00:11:16.781 Command Effects Log Page: Supported 00:11:16.781 Get Log Page Extended Data: Supported 00:11:16.781 Telemetry Log Pages: Supported 00:11:16.781 Persistent Event Log Pages: Not Supported 00:11:16.782 Supported Log Pages Log Page: May Support 00:11:16.782 Commands Supported & Effects Log Page: Not Supported 00:11:16.782 Feature Identifiers & Effects Log Page:May Support 00:11:16.782 NVMe-MI Commands & Effects Log Page: May Support 00:11:16.782 Data Area 4 for Telemetry Log: Not Supported 00:11:16.782 Error Log Page Entries Supported: 64 00:11:16.782 Keep Alive: Not Supported 00:11:16.782 00:11:16.782 NVM Command Set Attributes 00:11:16.782 ========================== 00:11:16.782 Submission Queue Entry Size 00:11:16.782 Max: 64 00:11:16.782 Min: 64 00:11:16.782 Completion Queue Entry Size 00:11:16.782 Max: 16 00:11:16.782 Min: 16 00:11:16.782 Number of Namespaces: 128 00:11:16.782 Compare Command: Not Supported 00:11:16.782 Write Uncorrectable Command: Supported 00:11:16.782 Dataset Management Command: Supported 00:11:16.782 Write Zeroes Command: Not Supported 00:11:16.782 Set Features Save Field: Not Supported 00:11:16.782 Reservations: Not Supported 00:11:16.782 Timestamp: Not Supported 00:11:16.782 Copy: Not Supported 00:11:16.782 Volatile Write Cache: Not Present 00:11:16.782 Atomic Write Unit (Normal): 1 00:11:16.782 Atomic Write Unit (PFail): 1 00:11:16.782 Atomic Compare & Write Unit: 1 00:11:16.782 Fused Compare & Write: Not Supported 00:11:16.782 Scatter-Gather List 00:11:16.782 SGL Command Set: Not Supported 00:11:16.782 SGL Keyed: Not Supported 00:11:16.782 SGL Bit Bucket Descriptor: Not Supported 00:11:16.782 SGL Metadata Pointer: Not Supported 00:11:16.782 Oversized SGL: Not Supported 00:11:16.782 SGL Metadata Address: Not Supported 00:11:16.782 SGL Offset: Not Supported 00:11:16.782 Transport SGL Data Block: Not Supported 00:11:16.782 Replay Protected Memory Block: Not Supported 00:11:16.782 00:11:16.782 Firmware Slot Information 00:11:16.782 ========================= 00:11:16.782 Active slot: 1 00:11:16.782 Slot 1 Firmware Revision: VDV10184 00:11:16.782 00:11:16.782 00:11:16.782 Commands Supported and Effects 00:11:16.782 ============================== 00:11:16.782 Admin Commands 00:11:16.782 -------------- 00:11:16.782 Delete I/O Submission Queue (00h): Supported 00:11:16.782 Create I/O Submission Queue (01h): Supported All-NS-Exclusive 00:11:16.782 Get Log Page (02h): Supported 00:11:16.782 Delete I/O Completion Queue (04h): Supported 00:11:16.782 Create I/O Completion Queue (05h): Supported All-NS-Exclusive 00:11:16.782 Identify (06h): Supported 00:11:16.782 Abort (08h): Supported 00:11:16.782 Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 00:11:16.782 Get Features (0Ah): Supported 
00:11:16.782 Asynchronous Event Request (0Ch): Supported 00:11:16.782 Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive 00:11:16.782 Firmware Commit (10h): Supported Ctrlr-Cap-Change 00:11:16.782 Firmware Image Download (11h): Supported 00:11:16.782 Namespace Attachment (15h): Supported Per-NS-Exclusive 00:11:16.782 Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive 00:11:16.782 Vendor specific (C8h): Supported 00:11:16.782 Vendor specific (D2h): Supported 00:11:16.782 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:11:16.782 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive 00:11:16.782 I/O Commands 00:11:16.782 ------------ 00:11:16.782 Flush (00h): Supported LBA-Change 00:11:16.782 Write (01h): Supported LBA-Change 00:11:16.782 Read (02h): Supported 00:11:16.782 Write Uncorrectable (04h): Supported LBA-Change 00:11:16.782 Dataset Management (09h): Supported LBA-Change 00:11:16.782 00:11:16.782 Error Log 00:11:16.782 ========= 00:11:16.782 Entry: 0 00:11:16.782 Error Count: 0x4c8b 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 1 00:11:16.782 Error Count: 0x4c8a 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 2 00:11:16.782 Error Count: 0x4c89 00:11:16.782 Submission Queue Id: 0x0 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 3 00:11:16.782 Error Count: 0x4c88 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 4 00:11:16.782 Error Count: 0x4c87 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 5 00:11:16.782 Error Count: 0x4c86 00:11:16.782 Submission Queue Id: 0x0 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 6 00:11:16.782 Error Count: 
0x4c85 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 7 00:11:16.782 Error Count: 0x4c84 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 8 00:11:16.782 Error Count: 0x4c83 00:11:16.782 Submission Queue Id: 0x0 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 9 00:11:16.782 Error Count: 0x4c82 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 10 00:11:16.782 Error Count: 0x4c81 00:11:16.782 Submission Queue Id: 0x2 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.782 LBA: 0x0 00:11:16.782 Namespace: 0xffffffff 00:11:16.782 Vendor Log Page: 0x0 00:11:16.782 ----------- 00:11:16.782 Entry: 11 00:11:16.782 Error Count: 0x4c80 00:11:16.782 Submission Queue Id: 0x0 00:11:16.782 Command Id: 0xffff 00:11:16.782 Phase Bit: 0 00:11:16.782 Status Code: 0x6 00:11:16.782 Status Code Type: 0x0 00:11:16.782 Do Not Retry: 1 00:11:16.782 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 12 00:11:16.783 Error Count: 0x4c7f 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 13 00:11:16.783 Error Count: 0x4c7e 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 14 00:11:16.783 Error Count: 0x4c7d 00:11:16.783 Submission Queue Id: 0x0 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 15 
00:11:16.783 Error Count: 0x4c7c 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 16 00:11:16.783 Error Count: 0x4c7b 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 17 00:11:16.783 Error Count: 0x4c7a 00:11:16.783 Submission Queue Id: 0x0 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 18 00:11:16.783 Error Count: 0x4c79 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 19 00:11:16.783 Error Count: 0x4c78 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 20 00:11:16.783 Error Count: 0x4c77 00:11:16.783 Submission Queue Id: 0x0 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 21 00:11:16.783 Error Count: 0x4c76 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 22 00:11:16.783 Error Count: 0x4c75 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 23 00:11:16.783 Error Count: 0x4c74 00:11:16.783 Submission Queue Id: 0x0 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 
----------- 00:11:16.783 Entry: 24 00:11:16.783 Error Count: 0x4c73 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 25 00:11:16.783 Error Count: 0x4c72 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 26 00:11:16.783 Error Count: 0x4c71 00:11:16.783 Submission Queue Id: 0x0 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 27 00:11:16.783 Error Count: 0x4c70 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 28 00:11:16.783 Error Count: 0x4c6f 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 29 00:11:16.783 Error Count: 0x4c6e 00:11:16.783 Submission Queue Id: 0x0 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 30 00:11:16.783 Error Count: 0x4c6d 00:11:16.783 Submission Queue Id: 0x2 00:11:16.783 Command Id: 0xffff 00:11:16.783 Phase Bit: 0 00:11:16.783 Status Code: 0x6 00:11:16.783 Status Code Type: 0x0 00:11:16.783 Do Not Retry: 1 00:11:16.783 Error Location: 0xffff 00:11:16.783 LBA: 0x0 00:11:16.783 Namespace: 0xffffffff 00:11:16.783 Vendor Log Page: 0x0 00:11:16.783 ----------- 00:11:16.783 Entry: 31 00:11:16.783 Error Count: 0x4c6c 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 32 00:11:16.784 Error Count: 0x4c6b 00:11:16.784 Submission Queue Id: 0x0 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor 
Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 33 00:11:16.784 Error Count: 0x4c6a 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 34 00:11:16.784 Error Count: 0x4c69 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 35 00:11:16.784 Error Count: 0x4c68 00:11:16.784 Submission Queue Id: 0x0 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 36 00:11:16.784 Error Count: 0x4c67 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 37 00:11:16.784 Error Count: 0x4c66 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 38 00:11:16.784 Error Count: 0x4c65 00:11:16.784 Submission Queue Id: 0x0 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 39 00:11:16.784 Error Count: 0x4c64 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 40 00:11:16.784 Error Count: 0x4c63 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 41 00:11:16.784 Error Count: 0x4c62 00:11:16.784 Submission Queue Id: 0x0 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 
0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 42 00:11:16.784 Error Count: 0x4c61 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 43 00:11:16.784 Error Count: 0x4c60 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 44 00:11:16.784 Error Count: 0x4c5f 00:11:16.784 Submission Queue Id: 0x0 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 45 00:11:16.784 Error Count: 0x4c5e 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 46 00:11:16.784 Error Count: 0x4c5d 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 47 00:11:16.784 Error Count: 0x4c5c 00:11:16.784 Submission Queue Id: 0x0 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 48 00:11:16.784 Error Count: 0x4c5b 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 49 00:11:16.784 Error Count: 0x4c5a 00:11:16.784 Submission Queue Id: 0x2 00:11:16.784 Command Id: 0xffff 00:11:16.784 Phase Bit: 0 00:11:16.784 Status Code: 0x6 00:11:16.784 Status Code Type: 0x0 00:11:16.784 Do Not Retry: 1 00:11:16.784 Error Location: 0xffff 00:11:16.784 LBA: 0x0 00:11:16.784 Namespace: 0xffffffff 00:11:16.784 Vendor Log Page: 0x0 00:11:16.784 ----------- 00:11:16.784 Entry: 50 00:11:16.785 Error Count: 0x4c59 00:11:16.785 Submission Queue Id: 0x0 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 
0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 51 00:11:16.785 Error Count: 0x4c58 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 52 00:11:16.785 Error Count: 0x4c57 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 53 00:11:16.785 Error Count: 0x4c56 00:11:16.785 Submission Queue Id: 0x0 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 54 00:11:16.785 Error Count: 0x4c55 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 55 00:11:16.785 Error Count: 0x4c54 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 56 00:11:16.785 Error Count: 0x4c53 00:11:16.785 Submission Queue Id: 0x0 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 57 00:11:16.785 Error Count: 0x4c52 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 58 00:11:16.785 Error Count: 0x4c51 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 59 00:11:16.785 Error Count: 0x4c50 00:11:16.785 Submission Queue Id: 0x0 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error 
Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 60 00:11:16.785 Error Count: 0x4c4f 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 61 00:11:16.785 Error Count: 0x4c4e 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 62 00:11:16.785 Error Count: 0x4c4d 00:11:16.785 Submission Queue Id: 0x0 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 ----------- 00:11:16.785 Entry: 63 00:11:16.785 Error Count: 0x4c4c 00:11:16.785 Submission Queue Id: 0x2 00:11:16.785 Command Id: 0xffff 00:11:16.785 Phase Bit: 0 00:11:16.785 Status Code: 0x6 00:11:16.785 Status Code Type: 0x0 00:11:16.785 Do Not Retry: 1 00:11:16.785 Error Location: 0xffff 00:11:16.785 LBA: 0x0 00:11:16.785 Namespace: 0xffffffff 00:11:16.785 Vendor Log Page: 0x0 00:11:16.785 00:11:16.785 Arbitration 00:11:16.785 =========== 00:11:16.785 Arbitration Burst: 1 00:11:16.785 Low Priority Weight: 1 00:11:16.785 Medium Priority Weight: 1 00:11:16.785 High Priority Weight: 1 00:11:16.785 00:11:16.785 Power Management 00:11:16.785 ================ 00:11:16.785 Number of Power States: 1 00:11:16.785 Current Power State: Power State #0 00:11:16.785 Power State #0: 00:11:16.785 Max Power: 20.00 W 00:11:16.785 Non-Operational State: Operational 00:11:16.785 Entry Latency: Not Reported 00:11:16.785 Exit Latency: Not Reported 00:11:16.785 Relative Read Throughput: 0 00:11:16.785 Relative Read Latency: 0 00:11:16.785 Relative Write Throughput: 0 00:11:16.785 Relative Write Latency: 0 00:11:16.785 Idle Power: Not Reported 00:11:16.785 Active Power: Not Reported 00:11:16.785 Non-Operational Permissive Mode: Not Supported 00:11:16.785 00:11:16.785 Health Information 00:11:16.785 ================== 00:11:16.785 Critical Warnings: 00:11:16.785 Available Spare Space: OK 00:11:16.785 Temperature: OK 00:11:16.785 Device Reliability: OK 00:11:16.785 Read Only: No 00:11:16.785 Volatile Memory Backup: OK 00:11:16.785 Current Temperature: 310 Kelvin (37 Celsius) 00:11:16.785 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:16.785 Available Spare: 99% 00:11:16.785 Available Spare Threshold: 10% 00:11:16.785 Life Percentage Used: 17% 00:11:16.785 Data Units Read: 371083395 00:11:16.785 Data Units Written: 510491648 00:11:16.785 Host Read Commands: 22082840448 00:11:16.785 Host Write Commands: 25063102597 00:11:16.785 Controller Busy Time: 2526 minutes 00:11:16.785 Power Cycles: 28 00:11:16.785 Power On Hours: 15505 hours 00:11:16.785 Unsafe Shutdowns: 45 00:11:16.785 Unrecoverable Media Errors: 0 00:11:16.785 Lifetime Error Log Entries: 19595 00:11:16.785 Warning Temperature Time: 1188 minutes 00:11:16.785 
Critical Temperature Time: 0 minutes 00:11:16.785 00:11:16.785 Number of Queues 00:11:16.785 ================ 00:11:16.785 Number of I/O Submission Queues: 128 00:11:16.785 Number of I/O Completion Queues: 128 00:11:16.785 00:11:16.785 Intel Health Information 00:11:16.785 ================== 00:11:16.785 Program Fail Count: 00:11:16.785 Normalized Value : 100 00:11:16.785 Current Raw Value: 5 00:11:16.785 Erase Fail Count: 00:11:16.785 Normalized Value : 100 00:11:16.785 Current Raw Value: 1 00:11:16.785 Wear Leveling Count: 00:11:16.785 Normalized Value : 83 00:11:16.785 Current Raw Value: 00:11:16.786 Min: 107 00:11:16.786 Max: 893 00:11:16.786 Avg: 831 00:11:16.786 End to End Error Detection Count: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 0 00:11:16.786 CRC Error Count: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 0 00:11:16.786 Timed Workload, Media Wear: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 65535 00:11:16.786 Timed Workload, Host Read/Write Ratio: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 65535% 00:11:16.786 Timed Workload, Timer: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 65535 00:11:16.786 Thermal Throttle Status: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 00:11:16.786 Percentage: 0% 00:11:16.786 Throttling Event Count: 1 00:11:16.786 Retry Buffer Overflow Counter: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 0 00:11:16.786 PLL Lock Loss Count: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 0 00:11:16.786 NAND Bytes Written: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 57507793 00:11:16.786 Host Bytes Written: 00:11:16.786 Normalized Value : 100 00:11:16.786 Current Raw Value: 7789484 00:11:16.786 00:11:16.786 Intel Temperature Information 00:11:16.786 ================== 00:11:16.786 Current Temperature: 37 00:11:16.786 Overtemp shutdown Flag for last critical component temperature: 0 00:11:16.786 Overtemp shutdown Flag for life critical component temperature: 0 00:11:16.786 Highest temperature: 73 00:11:16.786 Lowest temperature: 21 00:11:16.786 Specified Maximum Operating Temperature: 70 00:11:16.786 Specified Minimum Operating Temperature: 0 00:11:16.786 Estimated offset: 0 00:11:16.786 00:11:16.786 00:11:16.786 Intel Marketing Information 00:11:16.786 ================== 00:11:16.786 Marketing Product Information: Intel(R) SSD DC P4510 Series 00:11:16.786 00:11:16.786 00:11:16.786 Active Namespaces 00:11:16.786 ================= 00:11:16.786 Namespace ID:1 00:11:16.786 Error Recovery Timeout: Unlimited 00:11:16.786 Command Set Identifier: NVM (00h) 00:11:16.786 Deallocate: Supported 00:11:16.786 Deallocated/Unwritten Error: Not Supported 00:11:16.786 Deallocated Read Value: All 0x00 00:11:16.786 Deallocate in Write Zeroes: Not Supported 00:11:16.786 Deallocated Guard Field: 0xFFFF 00:11:16.786 Flush: Not Supported 00:11:16.786 Reservation: Not Supported 00:11:16.786 Namespace Sharing Capabilities: Private 00:11:16.786 Size (in LBAs): 7814037168 (3726GiB) 00:11:16.786 Capacity (in LBAs): 7814037168 (3726GiB) 00:11:16.786 Utilization (in LBAs): 7814037168 (3726GiB) 00:11:16.786 NGUID: 010000000F3D00000000000000000000 00:11:16.786 EUI64: 0000000000000F3D 00:11:16.786 Thin Provisioning: Not Supported 00:11:16.786 Per-NS Atomic Units: No 00:11:16.786 NGUID/EUI64 Never Reused: No 00:11:16.786 Namespace Write Protected: No 00:11:16.786 Number of LBA Formats: 2 
00:11:16.786 Current LBA Format: LBA Format #00 00:11:16.786 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.786 LBA Format #01: Data Size: 4096 Metadata Size: 0 00:11:16.786 00:11:16.786 00:11:16.786 real 0m0.749s 00:11:16.786 user 0m0.222s 00:11:16.786 sys 0m0.436s 00:11:16.786 20:06:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:16.786 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:16.786 ************************************ 00:11:16.786 END TEST nvme_identify 00:11:16.786 ************************************ 00:11:16.786 20:06:14 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:16.786 20:06:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:16.786 20:06:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:16.786 20:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:16.786 ************************************ 00:11:16.786 START TEST nvme_perf 00:11:16.786 ************************************ 00:11:16.786 20:06:14 -- common/autotest_common.sh@1104 -- # nvme_perf 00:11:16.786 20:06:14 -- nvme/nvme.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:18.167 Initializing NVMe Controllers 00:11:18.167 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:18.167 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:11:18.167 Initialization complete. Launching workers. 00:11:18.167 ======================================================== 00:11:18.167 Latency(us) 00:11:18.167 Device Information : IOPS MiB/s Average min max 00:11:18.167 PCIE (0000:5e:00.0) NSID 1 from core 0: 103629.46 1214.41 1234.68 68.56 3110.11 00:11:18.167 ======================================================== 00:11:18.167 Total : 103629.46 1214.41 1234.68 68.56 3110.11 00:11:18.167 00:11:18.167 Summary latency data for PCIE (0000:5e:00.0) NSID 1 from core 0: 00:11:18.167 ================================================================================= 00:11:18.167 1.00000% : 223.499us 00:11:18.167 10.00000% : 541.384us 00:11:18.167 25.00000% : 812.077us 00:11:18.167 50.00000% : 1225.238us 00:11:18.167 75.00000% : 1645.523us 00:11:18.167 90.00000% : 1951.833us 00:11:18.167 95.00000% : 2108.550us 00:11:18.167 98.00000% : 2279.513us 00:11:18.167 99.00000% : 2407.736us 00:11:18.167 99.50000% : 2521.711us 00:11:18.167 99.90000% : 2721.169us 00:11:18.167 99.99000% : 2906.379us 00:11:18.167 99.99900% : 3077.343us 00:11:18.167 99.99990% : 3120.083us 00:11:18.167 99.99999% : 3120.083us 00:11:18.167 00:11:18.167 Latency histogram for PCIE (0000:5e:00.0) NSID 1 from core 0: 00:11:18.167 ============================================================================== 00:11:18.167 Range in us Cumulative IO count 00:11:18.167 68.118 - 68.563: 0.0010% ( 1) 00:11:18.167 72.570 - 73.016: 0.0019% ( 1) 00:11:18.167 73.016 - 73.461: 0.0029% ( 1) 00:11:18.167 73.906 - 74.351: 0.0039% ( 1) 00:11:18.167 80.139 - 80.584: 0.0058% ( 2) 00:11:18.167 80.584 - 81.030: 0.0068% ( 1) 00:11:18.167 81.030 - 81.475: 0.0077% ( 1) 00:11:18.167 83.701 - 84.146: 0.0096% ( 2) 00:11:18.167 84.146 - 84.591: 0.0106% ( 1) 00:11:18.167 84.591 - 85.037: 0.0116% ( 1) 00:11:18.167 85.482 - 85.927: 0.0135% ( 2) 00:11:18.167 86.817 - 87.263: 0.0145% ( 1) 00:11:18.167 87.263 - 87.708: 0.0154% ( 1) 00:11:18.167 89.043 - 89.489: 0.0164% ( 1) 00:11:18.167 89.489 - 89.934: 0.0174% ( 1) 00:11:18.167 89.934 - 90.379: 0.0183% ( 1) 00:11:18.167 90.824 - 91.270: 0.0193% ( 1) 00:11:18.167 91.270 - 91.715: 0.0203% ( 1) 
00:11:18.167 91.715 - 92.160: 0.0222% ( 2) 00:11:18.167 92.160 - 92.605: 0.0232% ( 1) 00:11:18.167 93.941 - 94.386: 0.0241% ( 1) 00:11:18.167 94.831 - 95.277: 0.0251% ( 1) 00:11:18.167 95.277 - 95.722: 0.0260% ( 1) 00:11:18.167 97.057 - 97.503: 0.0280% ( 2) 00:11:18.167 97.503 - 97.948: 0.0299% ( 2) 00:11:18.167 100.619 - 101.064: 0.0309% ( 1) 00:11:18.167 101.510 - 101.955: 0.0318% ( 1) 00:11:18.168 101.955 - 102.400: 0.0328% ( 1) 00:11:18.168 102.400 - 102.845: 0.0357% ( 3) 00:11:18.168 102.845 - 103.290: 0.0367% ( 1) 00:11:18.168 104.181 - 104.626: 0.0376% ( 1) 00:11:18.168 104.626 - 105.071: 0.0386% ( 1) 00:11:18.168 105.071 - 105.517: 0.0396% ( 1) 00:11:18.168 106.852 - 107.297: 0.0405% ( 1) 00:11:18.168 108.188 - 108.633: 0.0415% ( 1) 00:11:18.168 108.633 - 109.078: 0.0434% ( 2) 00:11:18.168 109.078 - 109.523: 0.0463% ( 3) 00:11:18.168 109.969 - 110.414: 0.0473% ( 1) 00:11:18.168 110.414 - 110.859: 0.0492% ( 2) 00:11:18.168 110.859 - 111.304: 0.0502% ( 1) 00:11:18.168 111.304 - 111.750: 0.0531% ( 3) 00:11:18.168 111.750 - 112.195: 0.0560% ( 3) 00:11:18.168 112.195 - 112.640: 0.0569% ( 1) 00:11:18.168 112.640 - 113.085: 0.0588% ( 2) 00:11:18.168 113.085 - 113.530: 0.0608% ( 2) 00:11:18.168 113.976 - 114.866: 0.0646% ( 4) 00:11:18.168 114.866 - 115.757: 0.0695% ( 5) 00:11:18.168 115.757 - 116.647: 0.0743% ( 5) 00:11:18.168 116.647 - 117.537: 0.0830% ( 9) 00:11:18.168 117.537 - 118.428: 0.0849% ( 2) 00:11:18.168 118.428 - 119.318: 0.0887% ( 4) 00:11:18.168 119.318 - 120.209: 0.0897% ( 1) 00:11:18.168 120.209 - 121.099: 0.0936% ( 4) 00:11:18.168 121.099 - 121.990: 0.0974% ( 4) 00:11:18.168 121.990 - 122.880: 0.1032% ( 6) 00:11:18.168 122.880 - 123.770: 0.1061% ( 3) 00:11:18.168 123.770 - 124.661: 0.1129% ( 7) 00:11:18.168 124.661 - 125.551: 0.1167% ( 4) 00:11:18.168 125.551 - 126.442: 0.1206% ( 4) 00:11:18.168 126.442 - 127.332: 0.1264% ( 6) 00:11:18.168 127.332 - 128.223: 0.1302% ( 4) 00:11:18.168 128.223 - 129.113: 0.1341% ( 4) 00:11:18.168 129.113 - 130.003: 0.1399% ( 6) 00:11:18.168 130.003 - 130.894: 0.1447% ( 5) 00:11:18.168 130.894 - 131.784: 0.1495% ( 5) 00:11:18.168 131.784 - 132.675: 0.1534% ( 4) 00:11:18.168 132.675 - 133.565: 0.1582% ( 5) 00:11:18.168 133.565 - 134.456: 0.1650% ( 7) 00:11:18.168 134.456 - 135.346: 0.1717% ( 7) 00:11:18.168 135.346 - 136.237: 0.1785% ( 7) 00:11:18.168 136.237 - 137.127: 0.1833% ( 5) 00:11:18.168 137.127 - 138.017: 0.1949% ( 12) 00:11:18.168 138.017 - 138.908: 0.1987% ( 4) 00:11:18.168 138.908 - 139.798: 0.2055% ( 7) 00:11:18.168 139.798 - 140.689: 0.2103% ( 5) 00:11:18.168 140.689 - 141.579: 0.2161% ( 6) 00:11:18.168 141.579 - 142.470: 0.2248% ( 9) 00:11:18.168 143.360 - 144.250: 0.2325% ( 8) 00:11:18.168 144.250 - 145.141: 0.2383% ( 6) 00:11:18.168 145.141 - 146.031: 0.2450% ( 7) 00:11:18.168 146.031 - 146.922: 0.2489% ( 4) 00:11:18.168 146.922 - 147.812: 0.2556% ( 7) 00:11:18.168 147.812 - 148.703: 0.2605% ( 5) 00:11:18.168 148.703 - 149.593: 0.2672% ( 7) 00:11:18.168 149.593 - 150.483: 0.2759% ( 9) 00:11:18.168 150.483 - 151.374: 0.2817% ( 6) 00:11:18.168 151.374 - 152.264: 0.2923% ( 11) 00:11:18.168 152.264 - 153.155: 0.2990% ( 7) 00:11:18.168 153.155 - 154.045: 0.3058% ( 7) 00:11:18.168 154.045 - 154.936: 0.3126% ( 7) 00:11:18.168 154.936 - 155.826: 0.3154% ( 3) 00:11:18.168 155.826 - 156.717: 0.3222% ( 7) 00:11:18.168 156.717 - 157.607: 0.3280% ( 6) 00:11:18.168 157.607 - 158.497: 0.3386% ( 11) 00:11:18.168 158.497 - 159.388: 0.3463% ( 8) 00:11:18.168 159.388 - 160.278: 0.3521% ( 6) 00:11:18.168 160.278 - 161.169: 0.3589% ( 7) 
00:11:18.168 161.169 - 162.059: 0.3627% ( 4) 00:11:18.168 162.059 - 162.950: 0.3733% ( 11) 00:11:18.168 162.950 - 163.840: 0.3820% ( 9) 00:11:18.168 163.840 - 164.730: 0.3849% ( 3) 00:11:18.168 164.730 - 165.621: 0.3897% ( 5) 00:11:18.168 165.621 - 166.511: 0.3926% ( 3) 00:11:18.168 166.511 - 167.402: 0.3955% ( 3) 00:11:18.168 167.402 - 168.292: 0.4042% ( 9) 00:11:18.168 168.292 - 169.183: 0.4158% ( 12) 00:11:18.168 169.183 - 170.073: 0.4235% ( 8) 00:11:18.168 170.073 - 170.963: 0.4322% ( 9) 00:11:18.168 170.963 - 171.854: 0.4360% ( 4) 00:11:18.168 171.854 - 172.744: 0.4437% ( 8) 00:11:18.168 172.744 - 173.635: 0.4524% ( 9) 00:11:18.168 173.635 - 174.525: 0.4621% ( 10) 00:11:18.168 174.525 - 175.416: 0.4688% ( 7) 00:11:18.168 175.416 - 176.306: 0.4746% ( 6) 00:11:18.168 176.306 - 177.197: 0.4814% ( 7) 00:11:18.168 177.197 - 178.087: 0.4843% ( 3) 00:11:18.168 178.087 - 178.977: 0.4978% ( 14) 00:11:18.168 178.977 - 179.868: 0.5055% ( 8) 00:11:18.168 179.868 - 180.758: 0.5180% ( 13) 00:11:18.168 180.758 - 181.649: 0.5248% ( 7) 00:11:18.168 181.649 - 182.539: 0.5315% ( 7) 00:11:18.168 182.539 - 183.430: 0.5412% ( 10) 00:11:18.168 183.430 - 184.320: 0.5441% ( 3) 00:11:18.168 184.320 - 185.210: 0.5518% ( 8) 00:11:18.168 185.210 - 186.101: 0.5557% ( 4) 00:11:18.168 186.101 - 186.991: 0.5701% ( 15) 00:11:18.168 186.991 - 187.882: 0.5807% ( 11) 00:11:18.168 187.882 - 188.772: 0.5885% ( 8) 00:11:18.168 188.772 - 189.663: 0.5981% ( 10) 00:11:18.168 189.663 - 190.553: 0.6097% ( 12) 00:11:18.168 190.553 - 191.443: 0.6135% ( 4) 00:11:18.168 191.443 - 192.334: 0.6270% ( 14) 00:11:18.168 192.334 - 193.224: 0.6348% ( 8) 00:11:18.168 193.224 - 194.115: 0.6425% ( 8) 00:11:18.168 194.115 - 195.005: 0.6540% ( 12) 00:11:18.168 195.005 - 195.896: 0.6608% ( 7) 00:11:18.168 195.896 - 196.786: 0.6656% ( 5) 00:11:18.168 196.786 - 197.677: 0.6743% ( 9) 00:11:18.168 197.677 - 198.567: 0.6849% ( 11) 00:11:18.168 198.567 - 199.457: 0.6926% ( 8) 00:11:18.168 199.457 - 200.348: 0.6984% ( 6) 00:11:18.168 200.348 - 201.238: 0.7061% ( 8) 00:11:18.168 201.238 - 202.129: 0.7254% ( 20) 00:11:18.168 202.129 - 203.019: 0.7409% ( 16) 00:11:18.168 203.019 - 203.910: 0.7505% ( 10) 00:11:18.168 203.910 - 204.800: 0.7660% ( 16) 00:11:18.168 204.800 - 205.690: 0.7785% ( 13) 00:11:18.168 205.690 - 206.581: 0.7910% ( 13) 00:11:18.168 206.581 - 207.471: 0.8045% ( 14) 00:11:18.168 207.471 - 208.362: 0.8200% ( 16) 00:11:18.168 208.362 - 209.252: 0.8287% ( 9) 00:11:18.168 209.252 - 210.143: 0.8393% ( 11) 00:11:18.168 210.143 - 211.033: 0.8489% ( 10) 00:11:18.168 211.033 - 211.923: 0.8576% ( 9) 00:11:18.168 211.923 - 212.814: 0.8692% ( 12) 00:11:18.168 212.814 - 213.704: 0.8759% ( 7) 00:11:18.168 213.704 - 214.595: 0.8904% ( 15) 00:11:18.168 214.595 - 215.485: 0.9097% ( 20) 00:11:18.168 215.485 - 216.376: 0.9213% ( 12) 00:11:18.168 216.376 - 217.266: 0.9328% ( 12) 00:11:18.168 217.266 - 218.157: 0.9463% ( 14) 00:11:18.169 218.157 - 219.047: 0.9550% ( 9) 00:11:18.169 219.047 - 219.937: 0.9647% ( 10) 00:11:18.169 219.937 - 220.828: 0.9753% ( 11) 00:11:18.169 220.828 - 221.718: 0.9888% ( 14) 00:11:18.169 221.718 - 222.609: 0.9984% ( 10) 00:11:18.169 222.609 - 223.499: 1.0139% ( 16) 00:11:18.169 223.499 - 224.390: 1.0245% ( 11) 00:11:18.169 224.390 - 225.280: 1.0370% ( 13) 00:11:18.169 225.280 - 226.170: 1.0563% ( 20) 00:11:18.169 226.170 - 227.061: 1.0708% ( 15) 00:11:18.169 227.061 - 227.951: 1.0901% ( 20) 00:11:18.169 227.951 - 229.732: 1.1181% ( 29) 00:11:18.169 229.732 - 231.513: 1.1441% ( 27) 00:11:18.169 231.513 - 233.294: 1.1634% ( 20) 
00:11:18.169 233.294 - 235.075: 1.1875% ( 25) 00:11:18.169 235.075 - 236.856: 1.2116% ( 25) 00:11:18.169 236.856 - 238.637: 1.2280% ( 17) 00:11:18.169 238.637 - 240.417: 1.2444% ( 17) 00:11:18.169 240.417 - 242.198: 1.2705% ( 27) 00:11:18.169 242.198 - 243.979: 1.3013% ( 32) 00:11:18.169 243.979 - 245.760: 1.3216% ( 21) 00:11:18.169 245.760 - 247.541: 1.3370% ( 16) 00:11:18.169 247.541 - 249.322: 1.3534% ( 17) 00:11:18.169 249.322 - 251.103: 1.3833% ( 31) 00:11:18.169 251.103 - 252.883: 1.4161% ( 34) 00:11:18.169 252.883 - 254.664: 1.4422% ( 27) 00:11:18.169 254.664 - 256.445: 1.4750% ( 34) 00:11:18.169 256.445 - 258.226: 1.5010% ( 27) 00:11:18.169 258.226 - 260.007: 1.5367% ( 37) 00:11:18.169 260.007 - 261.788: 1.5647% ( 29) 00:11:18.169 261.788 - 263.569: 1.5907% ( 27) 00:11:18.169 263.569 - 265.350: 1.6207% ( 31) 00:11:18.169 265.350 - 267.130: 1.6535% ( 34) 00:11:18.169 267.130 - 268.911: 1.6805% ( 28) 00:11:18.169 268.911 - 270.692: 1.7104% ( 31) 00:11:18.169 270.692 - 272.473: 1.7374% ( 28) 00:11:18.169 272.473 - 274.254: 1.7721% ( 36) 00:11:18.169 274.254 - 276.035: 1.8097% ( 39) 00:11:18.169 276.035 - 277.816: 1.8425% ( 34) 00:11:18.169 277.816 - 279.597: 1.8666% ( 25) 00:11:18.169 279.597 - 281.377: 1.8956% ( 30) 00:11:18.169 281.377 - 283.158: 1.9226% ( 28) 00:11:18.169 283.158 - 284.939: 1.9612% ( 40) 00:11:18.169 284.939 - 286.720: 1.9882% ( 28) 00:11:18.169 286.720 - 288.501: 2.0229% ( 36) 00:11:18.169 288.501 - 290.282: 2.0654% ( 44) 00:11:18.169 290.282 - 292.063: 2.1020% ( 38) 00:11:18.169 292.063 - 293.843: 2.1290% ( 28) 00:11:18.169 293.843 - 295.624: 2.1580% ( 30) 00:11:18.169 295.624 - 297.405: 2.1908% ( 34) 00:11:18.169 297.405 - 299.186: 2.2274% ( 38) 00:11:18.169 299.186 - 300.967: 2.2786% ( 53) 00:11:18.169 300.967 - 302.748: 2.3114% ( 34) 00:11:18.169 302.748 - 304.529: 2.3393% ( 29) 00:11:18.169 304.529 - 306.310: 2.3885% ( 51) 00:11:18.169 306.310 - 308.090: 2.4242% ( 37) 00:11:18.169 308.090 - 309.871: 2.4618% ( 39) 00:11:18.169 309.871 - 311.652: 2.4956% ( 35) 00:11:18.169 311.652 - 313.433: 2.5342% ( 40) 00:11:18.169 313.433 - 315.214: 2.5641% ( 31) 00:11:18.169 315.214 - 316.995: 2.6152% ( 53) 00:11:18.169 316.995 - 318.776: 2.6432% ( 29) 00:11:18.169 318.776 - 320.557: 2.6847% ( 43) 00:11:18.169 320.557 - 322.337: 2.7194% ( 36) 00:11:18.169 322.337 - 324.118: 2.7715% ( 54) 00:11:18.169 324.118 - 325.899: 2.8130% ( 43) 00:11:18.169 325.899 - 327.680: 2.8506% ( 39) 00:11:18.169 327.680 - 329.461: 2.8940% ( 45) 00:11:18.169 329.461 - 331.242: 2.9326% ( 40) 00:11:18.169 331.242 - 333.023: 2.9741% ( 43) 00:11:18.169 333.023 - 334.803: 3.0185% ( 46) 00:11:18.169 334.803 - 336.584: 3.0599% ( 43) 00:11:18.169 336.584 - 338.365: 3.1101% ( 52) 00:11:18.169 338.365 - 340.146: 3.1709% ( 63) 00:11:18.169 340.146 - 341.927: 3.2124% ( 43) 00:11:18.169 341.927 - 343.708: 3.2606% ( 50) 00:11:18.169 343.708 - 345.489: 3.3165% ( 58) 00:11:18.169 345.489 - 347.270: 3.3532% ( 38) 00:11:18.169 347.270 - 349.050: 3.4005% ( 49) 00:11:18.169 349.050 - 350.831: 3.4487% ( 50) 00:11:18.169 350.831 - 352.612: 3.5047% ( 58) 00:11:18.169 352.612 - 354.393: 3.5471% ( 44) 00:11:18.169 354.393 - 356.174: 3.5934% ( 48) 00:11:18.169 356.174 - 357.955: 3.6436% ( 52) 00:11:18.169 357.955 - 359.736: 3.6899% ( 48) 00:11:18.169 359.736 - 361.517: 3.7449% ( 57) 00:11:18.169 361.517 - 363.297: 3.8047% ( 62) 00:11:18.169 363.297 - 365.078: 3.8519% ( 49) 00:11:18.169 365.078 - 366.859: 3.8963% ( 46) 00:11:18.169 366.859 - 368.640: 3.9455% ( 51) 00:11:18.169 368.640 - 370.421: 3.9957% ( 52) 00:11:18.169 
370.421 - 372.202: 4.0449% ( 51) 00:11:18.169 372.202 - 373.983: 4.0999% ( 57) 00:11:18.169 373.983 - 375.763: 4.1462% ( 48) 00:11:18.169 375.763 - 377.544: 4.1983% ( 54) 00:11:18.169 377.544 - 379.325: 4.2455% ( 49) 00:11:18.169 379.325 - 381.106: 4.2938% ( 50) 00:11:18.169 381.106 - 382.887: 4.3603% ( 69) 00:11:18.169 382.887 - 384.668: 4.4182% ( 60) 00:11:18.169 384.668 - 386.449: 4.4645% ( 48) 00:11:18.169 386.449 - 388.230: 4.5118% ( 49) 00:11:18.169 388.230 - 390.010: 4.5590% ( 49) 00:11:18.169 390.010 - 391.791: 4.6015% ( 44) 00:11:18.169 391.791 - 393.572: 4.6517% ( 52) 00:11:18.169 393.572 - 395.353: 4.7028% ( 53) 00:11:18.169 395.353 - 397.134: 4.7684% ( 68) 00:11:18.169 397.134 - 398.915: 4.8234% ( 57) 00:11:18.169 398.915 - 400.696: 4.8745% ( 53) 00:11:18.169 400.696 - 402.477: 4.9150% ( 42) 00:11:18.169 402.477 - 404.257: 4.9555% ( 42) 00:11:18.169 404.257 - 406.038: 5.0124% ( 59) 00:11:18.169 406.038 - 407.819: 5.0636% ( 53) 00:11:18.169 407.819 - 409.600: 5.1195% ( 58) 00:11:18.169 409.600 - 411.381: 5.1938% ( 77) 00:11:18.169 411.381 - 413.162: 5.2517% ( 60) 00:11:18.169 413.162 - 414.943: 5.3076% ( 58) 00:11:18.169 414.943 - 416.723: 5.3829% ( 78) 00:11:18.169 416.723 - 418.504: 5.4475% ( 67) 00:11:18.169 418.504 - 420.285: 5.5179% ( 73) 00:11:18.169 420.285 - 422.066: 5.5816% ( 66) 00:11:18.169 422.066 - 423.847: 5.6376% ( 58) 00:11:18.169 423.847 - 425.628: 5.6954% ( 60) 00:11:18.169 425.628 - 427.409: 5.7591% ( 66) 00:11:18.169 427.409 - 429.190: 5.8102% ( 53) 00:11:18.169 429.190 - 430.970: 5.8662% ( 58) 00:11:18.169 430.970 - 432.751: 5.9347% ( 71) 00:11:18.169 432.751 - 434.532: 5.9974% ( 65) 00:11:18.169 434.532 - 436.313: 6.0524% ( 57) 00:11:18.169 436.313 - 438.094: 6.1045% ( 54) 00:11:18.169 438.094 - 439.875: 6.1710% ( 69) 00:11:18.169 439.875 - 441.656: 6.2299% ( 61) 00:11:18.169 441.656 - 443.437: 6.2820% ( 54) 00:11:18.169 443.437 - 445.217: 6.3389% ( 59) 00:11:18.169 445.217 - 446.998: 6.3958% ( 59) 00:11:18.169 446.998 - 448.779: 6.4556% ( 62) 00:11:18.169 448.779 - 450.560: 6.5144% ( 61) 00:11:18.169 450.560 - 452.341: 6.5685% ( 56) 00:11:18.169 452.341 - 454.122: 6.6292% ( 63) 00:11:18.170 454.122 - 455.903: 6.6900% ( 63) 00:11:18.170 455.903 - 459.464: 6.8125% ( 127) 00:11:18.170 459.464 - 463.026: 6.9553% ( 148) 00:11:18.170 463.026 - 466.588: 7.0913% ( 141) 00:11:18.170 466.588 - 470.150: 7.2389% ( 153) 00:11:18.170 470.150 - 473.711: 7.3923% ( 159) 00:11:18.170 473.711 - 477.273: 7.5254% ( 138) 00:11:18.170 477.273 - 480.835: 7.6576% ( 137) 00:11:18.170 480.835 - 484.397: 7.8119% ( 160) 00:11:18.170 484.397 - 487.958: 7.9547% ( 148) 00:11:18.170 487.958 - 491.520: 8.0946% ( 145) 00:11:18.170 491.520 - 495.082: 8.2181% ( 128) 00:11:18.170 495.082 - 498.643: 8.3570% ( 144) 00:11:18.170 498.643 - 502.205: 8.4766% ( 124) 00:11:18.170 502.205 - 505.767: 8.6068% ( 135) 00:11:18.170 505.767 - 509.329: 8.7621% ( 161) 00:11:18.170 509.329 - 512.890: 8.8856% ( 128) 00:11:18.170 512.890 - 516.452: 9.0390% ( 159) 00:11:18.170 516.452 - 520.014: 9.1895% ( 156) 00:11:18.170 520.014 - 523.576: 9.3323% ( 148) 00:11:18.170 523.576 - 527.137: 9.4972% ( 171) 00:11:18.170 527.137 - 530.699: 9.6602% ( 169) 00:11:18.170 530.699 - 534.261: 9.8204% ( 166) 00:11:18.170 534.261 - 537.823: 9.9824% ( 168) 00:11:18.170 537.823 - 541.384: 10.1387% ( 162) 00:11:18.170 541.384 - 544.946: 10.2796% ( 146) 00:11:18.170 544.946 - 548.508: 10.4291% ( 155) 00:11:18.170 548.508 - 552.070: 10.5969% ( 174) 00:11:18.170 552.070 - 555.631: 10.7465% ( 155) 00:11:18.170 555.631 - 559.193: 
10.8883% ( 147) 00:11:18.170 559.193 - 562.755: 11.0397% ( 157) 00:11:18.170 562.755 - 566.317: 11.1979% ( 164) 00:11:18.170 566.317 - 569.878: 11.3658% ( 174) 00:11:18.170 569.878 - 573.440: 11.5134% ( 153) 00:11:18.170 573.440 - 577.002: 11.6590% ( 151) 00:11:18.170 577.002 - 580.563: 11.8346% ( 182) 00:11:18.170 580.563 - 584.125: 12.0015% ( 173) 00:11:18.170 584.125 - 587.687: 12.1732% ( 178) 00:11:18.170 587.687 - 591.249: 12.3575% ( 191) 00:11:18.170 591.249 - 594.810: 12.5234% ( 172) 00:11:18.170 594.810 - 598.372: 12.6720% ( 154) 00:11:18.170 598.372 - 601.934: 12.8302% ( 164) 00:11:18.170 601.934 - 605.496: 13.0038% ( 180) 00:11:18.170 605.496 - 609.057: 13.1765% ( 179) 00:11:18.170 609.057 - 612.619: 13.3578% ( 188) 00:11:18.170 612.619 - 616.181: 13.5373% ( 186) 00:11:18.170 616.181 - 619.743: 13.7234% ( 193) 00:11:18.170 619.743 - 623.304: 13.9000% ( 183) 00:11:18.170 623.304 - 626.866: 14.0910% ( 198) 00:11:18.170 626.866 - 630.428: 14.2839% ( 200) 00:11:18.170 630.428 - 633.990: 14.4836% ( 207) 00:11:18.170 633.990 - 637.551: 14.6515% ( 174) 00:11:18.170 637.551 - 641.113: 14.8627% ( 219) 00:11:18.170 641.113 - 644.675: 15.0325% ( 176) 00:11:18.170 644.675 - 648.237: 15.2119% ( 186) 00:11:18.170 648.237 - 651.798: 15.3894% ( 184) 00:11:18.170 651.798 - 655.360: 15.5766% ( 194) 00:11:18.170 655.360 - 658.922: 15.7618% ( 192) 00:11:18.170 658.922 - 662.483: 15.9383% ( 183) 00:11:18.170 662.483 - 666.045: 16.1496% ( 219) 00:11:18.170 666.045 - 669.607: 16.3676% ( 226) 00:11:18.170 669.607 - 673.169: 16.5731% ( 213) 00:11:18.170 673.169 - 676.730: 16.7930% ( 228) 00:11:18.170 676.730 - 680.292: 17.0004% ( 215) 00:11:18.170 680.292 - 683.854: 17.2069% ( 214) 00:11:18.170 683.854 - 687.416: 17.4181% ( 219) 00:11:18.170 687.416 - 690.977: 17.6140% ( 203) 00:11:18.170 690.977 - 694.539: 17.8281% ( 222) 00:11:18.170 694.539 - 698.101: 18.0461% ( 226) 00:11:18.170 698.101 - 701.663: 18.2854% ( 248) 00:11:18.170 701.663 - 705.224: 18.4860% ( 208) 00:11:18.170 705.224 - 708.786: 18.6848% ( 206) 00:11:18.170 708.786 - 712.348: 18.8893% ( 212) 00:11:18.170 712.348 - 715.910: 19.1111% ( 230) 00:11:18.170 715.910 - 719.471: 19.3523% ( 250) 00:11:18.170 719.471 - 723.033: 19.5713% ( 227) 00:11:18.170 723.033 - 726.595: 19.7893% ( 226) 00:11:18.170 726.595 - 730.157: 19.9909% ( 209) 00:11:18.170 730.157 - 733.718: 20.1954% ( 212) 00:11:18.170 733.718 - 737.280: 20.4164% ( 229) 00:11:18.170 737.280 - 740.842: 20.6556% ( 248) 00:11:18.170 740.842 - 744.403: 20.8630% ( 215) 00:11:18.170 744.403 - 747.965: 21.1109% ( 257) 00:11:18.170 747.965 - 751.527: 21.3357% ( 233) 00:11:18.170 751.527 - 755.089: 21.5460% ( 218) 00:11:18.170 755.089 - 758.650: 21.7833% ( 246) 00:11:18.170 758.650 - 762.212: 21.9762% ( 200) 00:11:18.170 762.212 - 765.774: 22.1730% ( 204) 00:11:18.170 765.774 - 769.336: 22.4209% ( 257) 00:11:18.170 769.336 - 772.897: 22.6322% ( 219) 00:11:18.170 772.897 - 776.459: 22.8531% ( 229) 00:11:18.170 776.459 - 780.021: 23.0875% ( 243) 00:11:18.170 780.021 - 783.583: 23.2949% ( 215) 00:11:18.170 783.583 - 787.144: 23.5255% ( 239) 00:11:18.170 787.144 - 790.706: 23.7474% ( 230) 00:11:18.170 790.706 - 794.268: 23.9741% ( 235) 00:11:18.170 794.268 - 797.830: 24.2056% ( 240) 00:11:18.170 797.830 - 801.391: 24.4603% ( 264) 00:11:18.170 801.391 - 804.953: 24.6821% ( 230) 00:11:18.170 804.953 - 808.515: 24.9378% ( 265) 00:11:18.170 808.515 - 812.077: 25.1577% ( 228) 00:11:18.170 812.077 - 815.638: 25.3815% ( 232) 00:11:18.170 815.638 - 819.200: 25.5851% ( 211) 00:11:18.170 819.200 - 822.762: 
25.8021% ( 225) 00:11:18.170 822.762 - 826.323: 26.0124% ( 218) 00:11:18.170 826.323 - 829.885: 26.2275% ( 223) 00:11:18.170 829.885 - 833.447: 26.4629% ( 244) 00:11:18.170 833.447 - 837.009: 26.7137% ( 260) 00:11:18.170 837.009 - 840.570: 26.9453% ( 240) 00:11:18.170 840.570 - 844.132: 27.1893% ( 253) 00:11:18.170 844.132 - 847.694: 27.4257% ( 245) 00:11:18.170 847.694 - 851.256: 27.6263% ( 208) 00:11:18.170 851.256 - 854.817: 27.8636% ( 246) 00:11:18.170 854.817 - 858.379: 28.1444% ( 291) 00:11:18.170 858.379 - 861.941: 28.3633% ( 227) 00:11:18.170 861.941 - 865.503: 28.6006% ( 246) 00:11:18.170 865.503 - 869.064: 28.8061% ( 213) 00:11:18.170 869.064 - 872.626: 29.0357% ( 238) 00:11:18.170 872.626 - 876.188: 29.2788% ( 252) 00:11:18.170 876.188 - 879.750: 29.5200% ( 250) 00:11:18.170 879.750 - 883.311: 29.7226% ( 210) 00:11:18.170 883.311 - 886.873: 29.9290% ( 214) 00:11:18.170 886.873 - 890.435: 30.1547% ( 234) 00:11:18.170 890.435 - 893.997: 30.3747% ( 228) 00:11:18.170 893.997 - 897.558: 30.5879% ( 221) 00:11:18.170 897.558 - 901.120: 30.8223% ( 243) 00:11:18.170 901.120 - 904.682: 31.0287% ( 214) 00:11:18.170 904.682 - 908.243: 31.2525% ( 232) 00:11:18.170 908.243 - 911.805: 31.4657% ( 221) 00:11:18.170 911.805 - 918.929: 31.8979% ( 448) 00:11:18.170 918.929 - 926.052: 32.3474% ( 466) 00:11:18.170 926.052 - 933.176: 32.8008% ( 470) 00:11:18.170 933.176 - 940.299: 33.2475% ( 463) 00:11:18.170 940.299 - 947.423: 33.6469% ( 414) 00:11:18.170 947.423 - 954.546: 34.1215% ( 492) 00:11:18.170 954.546 - 961.670: 34.5855% ( 481) 00:11:18.170 961.670 - 968.793: 34.9906% ( 420) 00:11:18.170 968.793 - 975.917: 35.4575% ( 484) 00:11:18.171 975.917 - 983.040: 35.8733% ( 431) 00:11:18.171 983.040 - 990.163: 36.3036% ( 446) 00:11:18.171 990.163 - 997.287: 36.7107% ( 422) 00:11:18.171 997.287 - 1004.410: 37.1399% ( 445) 00:11:18.171 1004.410 - 1011.534: 37.6039% ( 481) 00:11:18.171 1011.534 - 1018.657: 38.0554% ( 468) 00:11:18.171 1018.657 - 1025.781: 38.4982% ( 459) 00:11:18.171 1025.781 - 1032.904: 38.8927% ( 409) 00:11:18.171 1032.904 - 1040.028: 39.3162% ( 439) 00:11:18.171 1040.028 - 1047.151: 39.6838% ( 381) 00:11:18.171 1047.151 - 1054.275: 40.0832% ( 414) 00:11:18.171 1054.275 - 1061.398: 40.5230% ( 456) 00:11:18.171 1061.398 - 1068.522: 40.9359% ( 428) 00:11:18.171 1068.522 - 1075.645: 41.3218% ( 400) 00:11:18.171 1075.645 - 1082.769: 41.7057% ( 398) 00:11:18.171 1082.769 - 1089.892: 42.1292% ( 439) 00:11:18.171 1089.892 - 1097.016: 42.5479% ( 434) 00:11:18.171 1097.016 - 1104.139: 42.9309% ( 397) 00:11:18.171 1104.139 - 1111.263: 43.3457% ( 430) 00:11:18.171 1111.263 - 1118.386: 43.7586% ( 428) 00:11:18.171 1118.386 - 1125.510: 44.1512% ( 407) 00:11:18.171 1125.510 - 1132.633: 44.5380% ( 401) 00:11:18.171 1132.633 - 1139.757: 44.9480% ( 425) 00:11:18.171 1139.757 - 1146.880: 45.3869% ( 455) 00:11:18.171 1146.880 - 1154.003: 45.8056% ( 434) 00:11:18.171 1154.003 - 1161.127: 46.2175% ( 427) 00:11:18.171 1161.127 - 1168.250: 46.6478% ( 446) 00:11:18.171 1168.250 - 1175.374: 47.0819% ( 450) 00:11:18.171 1175.374 - 1182.497: 47.5160% ( 450) 00:11:18.171 1182.497 - 1189.621: 47.9260% ( 425) 00:11:18.171 1189.621 - 1196.744: 48.3408% ( 430) 00:11:18.171 1196.744 - 1203.868: 48.7855% ( 461) 00:11:18.171 1203.868 - 1210.991: 49.1926% ( 422) 00:11:18.171 1210.991 - 1218.115: 49.6170% ( 440) 00:11:18.171 1218.115 - 1225.238: 50.0588% ( 458) 00:11:18.171 1225.238 - 1232.362: 50.4380% ( 393) 00:11:18.171 1232.362 - 1239.485: 50.8431% ( 420) 00:11:18.171 1239.485 - 1246.609: 51.2801% ( 453) 00:11:18.171 
1246.609 - 1253.732: 51.7306% ( 467) 00:11:18.171 1253.732 - 1260.856: 52.1300% ( 414) 00:11:18.171 1260.856 - 1267.979: 52.5699% ( 456) 00:11:18.171 1267.979 - 1275.103: 52.9808% ( 426) 00:11:18.171 1275.103 - 1282.226: 53.3918% ( 426) 00:11:18.171 1282.226 - 1289.350: 53.8162% ( 440) 00:11:18.171 1289.350 - 1296.473: 54.2012% ( 399) 00:11:18.171 1296.473 - 1303.597: 54.6160% ( 430) 00:11:18.171 1303.597 - 1310.720: 55.0259% ( 425) 00:11:18.171 1310.720 - 1317.843: 55.4649% ( 455) 00:11:18.171 1317.843 - 1324.967: 55.8942% ( 445) 00:11:18.171 1324.967 - 1332.090: 56.3331% ( 455) 00:11:18.171 1332.090 - 1339.214: 56.7006% ( 381) 00:11:18.171 1339.214 - 1346.337: 57.1482% ( 464) 00:11:18.171 1346.337 - 1353.461: 57.5929% ( 461) 00:11:18.171 1353.461 - 1360.584: 58.0145% ( 437) 00:11:18.171 1360.584 - 1367.708: 58.4062% ( 406) 00:11:18.171 1367.708 - 1374.831: 58.8441% ( 454) 00:11:18.171 1374.831 - 1381.955: 59.2966% ( 469) 00:11:18.171 1381.955 - 1389.078: 59.7519% ( 472) 00:11:18.171 1389.078 - 1396.202: 60.1744% ( 438) 00:11:18.171 1396.202 - 1403.325: 60.6008% ( 442) 00:11:18.171 1403.325 - 1410.449: 61.0561% ( 472) 00:11:18.171 1410.449 - 1417.572: 61.4999% ( 460) 00:11:18.171 1417.572 - 1424.696: 61.9427% ( 459) 00:11:18.171 1424.696 - 1431.819: 62.3797% ( 453) 00:11:18.171 1431.819 - 1438.943: 62.8620% ( 500) 00:11:18.171 1438.943 - 1446.066: 63.2729% ( 426) 00:11:18.171 1446.066 - 1453.190: 63.7273% ( 471) 00:11:18.171 1453.190 - 1460.313: 64.1479% ( 436) 00:11:18.171 1460.313 - 1467.437: 64.5675% ( 435) 00:11:18.171 1467.437 - 1474.560: 65.0364% ( 486) 00:11:18.171 1474.560 - 1481.683: 65.4406% ( 419) 00:11:18.171 1481.683 - 1488.807: 65.8660% ( 441) 00:11:18.171 1488.807 - 1495.930: 66.3107% ( 461) 00:11:18.171 1495.930 - 1503.054: 66.7139% ( 418) 00:11:18.171 1503.054 - 1510.177: 67.1702% ( 473) 00:11:18.171 1510.177 - 1517.301: 67.6043% ( 450) 00:11:18.171 1517.301 - 1524.424: 68.0298% ( 441) 00:11:18.171 1524.424 - 1531.548: 68.4253% ( 410) 00:11:18.171 1531.548 - 1538.671: 68.8845% ( 476) 00:11:18.171 1538.671 - 1545.795: 69.2858% ( 416) 00:11:18.171 1545.795 - 1552.918: 69.7035% ( 433) 00:11:18.171 1552.918 - 1560.042: 70.1327% ( 445) 00:11:18.171 1560.042 - 1567.165: 70.5591% ( 442) 00:11:18.171 1567.165 - 1574.289: 70.9913% ( 448) 00:11:18.171 1574.289 - 1581.412: 71.3849% ( 408) 00:11:18.171 1581.412 - 1588.536: 71.8200% ( 451) 00:11:18.171 1588.536 - 1595.659: 72.2627% ( 459) 00:11:18.171 1595.659 - 1602.783: 72.6708% ( 423) 00:11:18.171 1602.783 - 1609.906: 73.1281% ( 474) 00:11:18.171 1609.906 - 1617.030: 73.5544% ( 442) 00:11:18.171 1617.030 - 1624.153: 73.9750% ( 436) 00:11:18.171 1624.153 - 1631.277: 74.3542% ( 393) 00:11:18.171 1631.277 - 1638.400: 74.7603% ( 421) 00:11:18.171 1638.400 - 1645.523: 75.1944% ( 450) 00:11:18.171 1645.523 - 1652.647: 75.5947% ( 415) 00:11:18.171 1652.647 - 1659.770: 75.9999% ( 420) 00:11:18.171 1659.770 - 1666.894: 76.4292% ( 445) 00:11:18.171 1666.894 - 1674.017: 76.8748% ( 462) 00:11:18.171 1674.017 - 1681.141: 77.2559% ( 395) 00:11:18.171 1681.141 - 1688.264: 77.6582% ( 417) 00:11:18.171 1688.264 - 1695.388: 78.0633% ( 420) 00:11:18.171 1695.388 - 1702.511: 78.4502% ( 401) 00:11:18.171 1702.511 - 1709.635: 78.8486% ( 413) 00:11:18.171 1709.635 - 1716.758: 79.2277% ( 393) 00:11:18.171 1716.758 - 1723.882: 79.6753% ( 464) 00:11:18.171 1723.882 - 1731.005: 80.0110% ( 348) 00:11:18.171 1731.005 - 1738.129: 80.4306% ( 435) 00:11:18.171 1738.129 - 1745.252: 80.8435% ( 428) 00:11:18.171 1745.252 - 1752.376: 81.2168% ( 387) 00:11:18.171 
1752.376 - 1759.499: 81.5998% ( 397) 00:11:18.171 1759.499 - 1766.623: 81.9818% ( 396) 00:11:18.171 1766.623 - 1773.746: 82.3590% ( 391) 00:11:18.171 1773.746 - 1780.870: 82.7516% ( 407) 00:11:18.171 1780.870 - 1787.993: 83.1172% ( 379) 00:11:18.171 1787.993 - 1795.117: 83.5012% ( 398) 00:11:18.171 1795.117 - 1802.240: 83.8745% ( 387) 00:11:18.171 1802.240 - 1809.363: 84.2401% ( 379) 00:11:18.171 1809.363 - 1816.487: 84.6077% ( 381) 00:11:18.171 1816.487 - 1823.610: 84.9549% ( 360) 00:11:18.171 1823.610 - 1837.857: 85.6389% ( 709) 00:11:18.171 1837.857 - 1852.104: 86.3190% ( 705) 00:11:18.171 1852.104 - 1866.351: 86.9740% ( 679) 00:11:18.171 1866.351 - 1880.598: 87.6001% ( 649) 00:11:18.171 1880.598 - 1894.845: 88.2155% ( 638) 00:11:18.171 1894.845 - 1909.092: 88.7857% ( 591) 00:11:18.171 1909.092 - 1923.339: 89.3674% ( 603) 00:11:18.171 1923.339 - 1937.586: 89.9684% ( 623) 00:11:18.171 1937.586 - 1951.833: 90.4835% ( 534) 00:11:18.171 1951.833 - 1966.080: 91.0006% ( 536) 00:11:18.171 1966.080 - 1980.327: 91.5003% ( 518) 00:11:18.171 1980.327 - 1994.574: 91.9681% ( 485) 00:11:18.171 1994.574 - 2008.821: 92.4707% ( 521) 00:11:18.171 2008.821 - 2023.068: 92.9019% ( 447) 00:11:18.171 2023.068 - 2037.315: 93.3148% ( 428) 00:11:18.171 2037.315 - 2051.562: 93.7537% ( 455) 00:11:18.171 2051.562 - 2065.809: 94.1657% ( 427) 00:11:18.171 2065.809 - 2080.056: 94.5371% ( 385) 00:11:18.171 2080.056 - 2094.303: 94.8843% ( 360) 00:11:18.171 2094.303 - 2108.550: 95.2451% ( 374) 00:11:18.171 2108.550 - 2122.797: 95.5480% ( 314) 00:11:18.171 2122.797 - 2137.043: 95.8567% ( 320) 00:11:18.171 2137.043 - 2151.290: 96.1558% ( 310) 00:11:18.171 2151.290 - 2165.537: 96.4162% ( 270) 00:11:18.171 2165.537 - 2179.784: 96.6661% ( 259) 00:11:18.171 2179.784 - 2194.031: 96.9102% ( 253) 00:11:18.171 2194.031 - 2208.278: 97.1262% ( 224) 00:11:18.171 2208.278 - 2222.525: 97.3452% ( 227) 00:11:18.171 2222.525 - 2236.772: 97.5488% ( 211) 00:11:18.171 2236.772 - 2251.019: 97.7398% ( 198) 00:11:18.171 2251.019 - 2265.266: 97.9182% ( 185) 00:11:18.171 2265.266 - 2279.513: 98.0871% ( 175) 00:11:18.171 2279.513 - 2293.760: 98.2250% ( 143) 00:11:18.171 2293.760 - 2308.007: 98.3379% ( 117) 00:11:18.171 2308.007 - 2322.254: 98.4681% ( 135) 00:11:18.171 2322.254 - 2336.501: 98.5829% ( 119) 00:11:18.171 2336.501 - 2350.748: 98.6871% ( 108) 00:11:18.171 2350.748 - 2364.995: 98.7681% ( 84) 00:11:18.171 2364.995 - 2379.242: 98.8511% ( 86) 00:11:18.171 2379.242 - 2393.489: 98.9427% ( 95) 00:11:18.171 2393.489 - 2407.736: 99.0238% ( 84) 00:11:18.171 2407.736 - 2421.983: 99.0990% ( 78) 00:11:18.172 2421.983 - 2436.230: 99.1685% ( 72) 00:11:18.172 2436.230 - 2450.477: 99.2476% ( 82) 00:11:18.172 2450.477 - 2464.723: 99.2987% ( 53) 00:11:18.172 2464.723 - 2478.970: 99.3662% ( 70) 00:11:18.172 2478.970 - 2493.217: 99.4202% ( 56) 00:11:18.172 2493.217 - 2507.464: 99.4800% ( 62) 00:11:18.172 2507.464 - 2521.711: 99.5263% ( 48) 00:11:18.172 2521.711 - 2535.958: 99.5707% ( 46) 00:11:18.172 2535.958 - 2550.205: 99.6161% ( 47) 00:11:18.172 2550.205 - 2564.452: 99.6498% ( 35) 00:11:18.172 2564.452 - 2578.699: 99.6865% ( 38) 00:11:18.172 2578.699 - 2592.946: 99.7164% ( 31) 00:11:18.172 2592.946 - 2607.193: 99.7501% ( 35) 00:11:18.172 2607.193 - 2621.440: 99.7820% ( 33) 00:11:18.172 2621.440 - 2635.687: 99.8080% ( 27) 00:11:18.172 2635.687 - 2649.934: 99.8283% ( 21) 00:11:18.172 2649.934 - 2664.181: 99.8428% ( 15) 00:11:18.172 2664.181 - 2678.428: 99.8621% ( 20) 00:11:18.172 2678.428 - 2692.675: 99.8804% ( 19) 00:11:18.172 2692.675 - 2706.922: 
99.8958% ( 16) 00:11:18.172 2706.922 - 2721.169: 99.9084% ( 13) 00:11:18.172 2721.169 - 2735.416: 99.9209% ( 13) 00:11:18.172 2735.416 - 2749.663: 99.9286% ( 8) 00:11:18.172 2749.663 - 2763.910: 99.9402% ( 12) 00:11:18.172 2763.910 - 2778.157: 99.9518% ( 12) 00:11:18.172 2778.157 - 2792.403: 99.9595% ( 8) 00:11:18.172 2792.403 - 2806.650: 99.9682% ( 9) 00:11:18.172 2806.650 - 2820.897: 99.9720% ( 4) 00:11:18.172 2820.897 - 2835.144: 99.9759% ( 4) 00:11:18.172 2835.144 - 2849.391: 99.9807% ( 5) 00:11:18.172 2849.391 - 2863.638: 99.9817% ( 1) 00:11:18.172 2863.638 - 2877.885: 99.9846% ( 3) 00:11:18.172 2877.885 - 2892.132: 99.9865% ( 2) 00:11:18.172 2892.132 - 2906.379: 99.9904% ( 4) 00:11:18.172 2920.626 - 2934.873: 99.9923% ( 2) 00:11:18.172 2934.873 - 2949.120: 99.9932% ( 1) 00:11:18.172 2963.367 - 2977.614: 99.9961% ( 3) 00:11:18.172 3006.108 - 3020.355: 99.9971% ( 1) 00:11:18.172 3034.602 - 3048.849: 99.9981% ( 1) 00:11:18.172 3063.096 - 3077.343: 99.9990% ( 1) 00:11:18.172 3105.837 - 3120.083: 100.0000% ( 1) 00:11:18.172 00:11:18.172 20:06:15 -- nvme/nvme.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:19.552 Initializing NVMe Controllers 00:11:19.552 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:19.552 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:11:19.552 Initialization complete. Launching workers. 00:11:19.552 ======================================================== 00:11:19.552 Latency(us) 00:11:19.552 Device Information : IOPS MiB/s Average min max 00:11:19.552 PCIE (0000:5e:00.0) NSID 1 from core 0: 128134.54 1501.58 998.09 528.79 1891.81 00:11:19.552 ======================================================== 00:11:19.552 Total : 128134.54 1501.58 998.09 528.79 1891.81 00:11:19.552 00:11:19.552 Summary latency data for PCIE (0000:5e:00.0) NSID 1 from core 0: 00:11:19.552 ================================================================================= 00:11:19.552 1.00000% : 926.052us 00:11:19.552 10.00000% : 954.546us 00:11:19.552 25.00000% : 975.917us 00:11:19.552 50.00000% : 1004.410us 00:11:19.552 75.00000% : 1025.781us 00:11:19.552 90.00000% : 1047.151us 00:11:19.552 95.00000% : 1054.275us 00:11:19.552 98.00000% : 1068.522us 00:11:19.552 99.00000% : 1075.645us 00:11:19.552 99.50000% : 1082.769us 00:11:19.552 99.90000% : 1389.078us 00:11:19.552 99.99000% : 1866.351us 00:11:19.552 99.99900% : 1894.845us 00:11:19.552 99.99990% : 1894.845us 00:11:19.552 99.99999% : 1894.845us 00:11:19.552 00:11:19.552 Latency histogram for PCIE (0000:5e:00.0) NSID 1 from core 0: 00:11:19.552 ============================================================================== 00:11:19.552 Range in us Cumulative IO count 00:11:19.552 527.137 - 530.699: 0.0023% ( 3) 00:11:19.552 530.699 - 534.261: 0.0055% ( 4) 00:11:19.552 577.002 - 580.563: 0.0125% ( 9) 00:11:19.552 580.563 - 584.125: 0.0140% ( 2) 00:11:19.552 616.181 - 619.743: 0.0195% ( 7) 00:11:19.552 619.743 - 623.304: 0.0226% ( 4) 00:11:19.552 655.360 - 658.922: 0.0242% ( 2) 00:11:19.552 658.922 - 662.483: 0.0304% ( 8) 00:11:19.552 694.539 - 698.101: 0.0367% ( 8) 00:11:19.552 698.101 - 701.663: 0.0390% ( 3) 00:11:19.552 744.403 - 747.965: 0.0437% ( 6) 00:11:19.552 747.965 - 751.527: 0.0476% ( 5) 00:11:19.552 776.459 - 780.021: 0.0491% ( 2) 00:11:19.552 780.021 - 783.583: 0.0577% ( 11) 00:11:19.552 783.583 - 787.144: 0.0585% ( 1) 00:11:19.552 787.144 - 790.706: 0.0601% ( 2) 00:11:19.552 790.706 - 794.268: 0.0608% ( 1) 00:11:19.552 794.268 - 
797.830: 0.0624% ( 2) 00:11:19.552 797.830 - 801.391: 0.0632% ( 1) 00:11:19.552 801.391 - 804.953: 0.0647% ( 2) 00:11:19.552 804.953 - 808.515: 0.0655% ( 1) 00:11:19.552 808.515 - 812.077: 0.0679% ( 3) 00:11:19.552 812.077 - 815.638: 0.0718% ( 5) 00:11:19.552 819.200 - 822.762: 0.0749% ( 4) 00:11:19.552 826.323 - 829.885: 0.0788% ( 5) 00:11:19.552 829.885 - 833.447: 0.0866% ( 10) 00:11:19.552 833.447 - 837.009: 0.0897% ( 4) 00:11:19.552 840.570 - 844.132: 0.0921% ( 3) 00:11:19.552 847.694 - 851.256: 0.0936% ( 2) 00:11:19.552 851.256 - 854.817: 0.0944% ( 1) 00:11:19.552 854.817 - 858.379: 0.0952% ( 1) 00:11:19.552 869.064 - 872.626: 0.0991% ( 5) 00:11:19.552 872.626 - 876.188: 0.1038% ( 6) 00:11:19.552 886.873 - 890.435: 0.1053% ( 2) 00:11:19.552 890.435 - 893.997: 0.1084% ( 4) 00:11:19.552 893.997 - 897.558: 0.1155% ( 9) 00:11:19.552 901.120 - 904.682: 0.1194% ( 5) 00:11:19.552 904.682 - 908.243: 0.1264% ( 9) 00:11:19.552 908.243 - 911.805: 0.1927% ( 85) 00:11:19.552 911.805 - 918.929: 0.5149% ( 413) 00:11:19.552 918.929 - 926.052: 1.3808% ( 1110) 00:11:19.552 926.052 - 933.176: 2.9535% ( 2016) 00:11:19.552 933.176 - 940.299: 5.3936% ( 3128) 00:11:19.552 940.299 - 947.423: 8.6419% ( 4164) 00:11:19.552 947.423 - 954.546: 12.5081% ( 4956) 00:11:19.552 954.546 - 961.670: 16.8626% ( 5582) 00:11:19.552 961.670 - 968.793: 21.8279% ( 6365) 00:11:19.552 968.793 - 975.917: 27.5273% ( 7306) 00:11:19.552 975.917 - 983.040: 33.3664% ( 7485) 00:11:19.552 983.040 - 990.163: 40.0370% ( 8551) 00:11:19.552 990.163 - 997.287: 47.1015% ( 9056) 00:11:19.552 997.287 - 1004.410: 57.8006% ( 13715) 00:11:19.552 1004.410 - 1011.534: 65.9916% ( 10500) 00:11:19.552 1011.534 - 1018.657: 73.2465% ( 9300) 00:11:19.552 1018.657 - 1025.781: 79.2603% ( 7709) 00:11:19.552 1025.781 - 1032.904: 84.7054% ( 6980) 00:11:19.552 1032.904 - 1040.028: 89.0326% ( 5547) 00:11:19.552 1040.028 - 1047.151: 92.8754% ( 4926) 00:11:19.552 1047.151 - 1054.275: 95.7453% ( 3679) 00:11:19.552 1054.275 - 1061.398: 97.6878% ( 2490) 00:11:19.552 1061.398 - 1068.522: 98.8579% ( 1500) 00:11:19.552 1068.522 - 1075.645: 99.4602% ( 772) 00:11:19.552 1075.645 - 1082.769: 99.5717% ( 143) 00:11:19.552 1082.769 - 1089.892: 99.5819% ( 13) 00:11:19.552 1089.892 - 1097.016: 99.5873% ( 7) 00:11:19.552 1097.016 - 1104.139: 99.5936% ( 8) 00:11:19.552 1104.139 - 1111.263: 99.5982% ( 6) 00:11:19.552 1111.263 - 1118.386: 99.6021% ( 5) 00:11:19.552 1118.386 - 1125.510: 99.6076% ( 7) 00:11:19.552 1125.510 - 1132.633: 99.6146% ( 9) 00:11:19.552 1132.633 - 1139.757: 99.6217% ( 9) 00:11:19.552 1139.757 - 1146.880: 99.6302% ( 11) 00:11:19.552 1146.880 - 1154.003: 99.6458% ( 20) 00:11:19.552 1154.003 - 1161.127: 99.6622% ( 21) 00:11:19.552 1161.127 - 1168.250: 99.6786% ( 21) 00:11:19.552 1168.250 - 1175.374: 99.6903% ( 15) 00:11:19.552 1175.374 - 1182.497: 99.6981% ( 10) 00:11:19.552 1182.497 - 1189.621: 99.7067% ( 11) 00:11:19.552 1189.621 - 1196.744: 99.7137% ( 9) 00:11:19.552 1196.744 - 1203.868: 99.7215% ( 10) 00:11:19.552 1203.868 - 1210.991: 99.7285% ( 9) 00:11:19.552 1210.991 - 1218.115: 99.7371% ( 11) 00:11:19.552 1218.115 - 1225.238: 99.7504% ( 17) 00:11:19.552 1225.238 - 1232.362: 99.7574% ( 9) 00:11:19.552 1232.362 - 1239.485: 99.7644% ( 9) 00:11:19.552 1239.485 - 1246.609: 99.7738% ( 12) 00:11:19.552 1246.609 - 1253.732: 99.7847% ( 14) 00:11:19.552 1253.732 - 1260.856: 99.7964% ( 15) 00:11:19.552 1260.856 - 1267.979: 99.8042% ( 10) 00:11:19.552 1267.979 - 1275.103: 99.8136% ( 12) 00:11:19.552 1275.103 - 1282.226: 99.8284% ( 19) 00:11:19.552 1282.226 - 
1289.350: 99.8401% ( 15) 00:11:19.552 1289.350 - 1296.473: 99.8479% ( 10) 00:11:19.552 1296.473 - 1303.597: 99.8588% ( 14) 00:11:19.552 1303.597 - 1310.720: 99.8643% ( 7) 00:11:19.552 1310.720 - 1317.843: 99.8736% ( 12) 00:11:19.552 1317.843 - 1324.967: 99.8783% ( 6) 00:11:19.552 1324.967 - 1332.090: 99.8861% ( 10) 00:11:19.552 1332.090 - 1339.214: 99.8900% ( 5) 00:11:19.552 1339.214 - 1346.337: 99.8923% ( 3) 00:11:19.552 1346.337 - 1353.461: 99.8947% ( 3) 00:11:19.552 1353.461 - 1360.584: 99.8955% ( 1) 00:11:19.552 1360.584 - 1367.708: 99.8970% ( 2) 00:11:19.552 1367.708 - 1374.831: 99.8978% ( 1) 00:11:19.552 1374.831 - 1381.955: 99.8986% ( 1) 00:11:19.552 1381.955 - 1389.078: 99.9001% ( 2) 00:11:19.552 1745.252 - 1752.376: 99.9009% ( 1) 00:11:19.552 1752.376 - 1759.499: 99.9033% ( 3) 00:11:19.552 1759.499 - 1766.623: 99.9087% ( 7) 00:11:19.552 1766.623 - 1773.746: 99.9126% ( 5) 00:11:19.552 1773.746 - 1780.870: 99.9173% ( 6) 00:11:19.552 1780.870 - 1787.993: 99.9251% ( 10) 00:11:19.552 1787.993 - 1795.117: 99.9329% ( 10) 00:11:19.552 1795.117 - 1802.240: 99.9384% ( 7) 00:11:19.552 1802.240 - 1809.363: 99.9446% ( 8) 00:11:19.552 1809.363 - 1816.487: 99.9548% ( 13) 00:11:19.552 1816.487 - 1823.610: 99.9602% ( 7) 00:11:19.552 1823.610 - 1837.857: 99.9766% ( 21) 00:11:19.552 1837.857 - 1852.104: 99.9899% ( 17) 00:11:19.552 1852.104 - 1866.351: 99.9945% ( 6) 00:11:19.552 1866.351 - 1880.598: 99.9984% ( 5) 00:11:19.552 1880.598 - 1894.845: 100.0000% ( 2) 00:11:19.552 00:11:19.552 20:06:17 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:19.552 00:11:19.552 real 0m2.710s 00:11:19.552 user 0m2.182s 00:11:19.552 sys 0m0.372s 00:11:19.552 20:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.552 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.552 ************************************ 00:11:19.552 END TEST nvme_perf 00:11:19.552 ************************************ 00:11:19.552 20:06:17 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0 00:11:19.552 20:06:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:19.552 20:06:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.552 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.552 ************************************ 00:11:19.552 START TEST nvme_hello_world 00:11:19.552 ************************************ 00:11:19.552 20:06:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0 00:11:19.811 Initializing NVMe Controllers 00:11:19.811 Attached to 0000:5e:00.0 00:11:19.811 Namespace ID: 1 size: 4000GB 00:11:19.811 Initialization complete. 00:11:19.811 INFO: using host memory buffer for IO 00:11:19.811 Hello world! 
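Note: the hello_world example above is driven through the autotest run_test wrapper, but the binary can also be launched directly. A minimal sketch, assuming the tree is already built at the path shown in this log and that hugepages and driver binding were prepared by the harness beforehand:

    # run the built example directly; -i 0 selects shm id 0, matching the flag used above
    sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0

As the captured output indicates, the example attaches to the controller at 0000:5e:00.0, performs its I/O through a host memory buffer, and prints "Hello world!" on success.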
00:11:19.811 00:11:19.811 real 0m0.334s 00:11:19.811 user 0m0.092s 00:11:19.811 sys 0m0.178s 00:11:19.811 20:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:19.811 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.811 ************************************ 00:11:19.811 END TEST nvme_hello_world 00:11:19.811 ************************************ 00:11:19.811 20:06:17 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl 00:11:19.811 20:06:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:19.811 20:06:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:19.811 20:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:19.811 ************************************ 00:11:19.811 START TEST nvme_sgl 00:11:19.811 ************************************ 00:11:19.811 20:06:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl 00:11:20.380 NVMe Readv/Writev Request test 00:11:20.380 Attached to 0000:5e:00.0 00:11:20.380 0000:5e:00.0: build_io_request_0 test passed 00:11:20.380 0000:5e:00.0: build_io_request_1 test passed 00:11:20.380 0000:5e:00.0: build_io_request_2 test passed 00:11:20.380 0000:5e:00.0: build_io_request_3 test passed 00:11:20.380 0000:5e:00.0: build_io_request_4 test passed 00:11:20.380 0000:5e:00.0: build_io_request_5 test passed 00:11:20.380 0000:5e:00.0: build_io_request_6 test passed 00:11:20.380 0000:5e:00.0: build_io_request_7 test passed 00:11:20.380 0000:5e:00.0: build_io_request_8 test passed 00:11:20.380 0000:5e:00.0: build_io_request_9 test passed 00:11:20.380 0000:5e:00.0: build_io_request_10 test passed 00:11:20.380 0000:5e:00.0: build_io_request_11 test passed 00:11:20.380 Cleaning up... 00:11:20.380 00:11:20.380 real 0m0.403s 00:11:20.380 user 0m0.188s 00:11:20.380 sys 0m0.164s 00:11:20.380 20:06:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.380 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:20.380 ************************************ 00:11:20.380 END TEST nvme_sgl 00:11:20.380 ************************************ 00:11:20.380 20:06:18 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp 00:11:20.380 20:06:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:20.380 20:06:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.380 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:20.380 ************************************ 00:11:20.380 START TEST nvme_e2edp 00:11:20.380 ************************************ 00:11:20.380 20:06:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp 00:11:20.640 NVMe Write/Read with End-to-End data protection test 00:11:20.640 Attached to 0000:5e:00.0 00:11:20.640 Cleaning up... 
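Note: the end-to-end data protection run above attaches to 0000:5e:00.0 and goes straight to "Cleaning up..." with no write/read phase in the output. A minimal reproduction sketch, assuming the same built tree; whether the protection-information cases actually execute depends on how the namespace is formatted, which this log does not show:

    # rerun only the data-protection binary used by run_test nvme_e2edp above
    sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp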
00:11:20.640 00:11:20.640 real 0m0.259s 00:11:20.640 user 0m0.077s 00:11:20.640 sys 0m0.146s 00:11:20.640 20:06:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.640 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 ************************************ 00:11:20.640 END TEST nvme_e2edp 00:11:20.640 ************************************ 00:11:20.640 20:06:18 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve 00:11:20.640 20:06:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:20.640 20:06:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.640 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:20.640 ************************************ 00:11:20.640 START TEST nvme_reserve 00:11:20.640 ************************************ 00:11:20.640 20:06:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve 00:11:20.899 ===================================================== 00:11:20.899 NVMe Controller at PCI bus 94, device 0, function 0 00:11:20.899 ===================================================== 00:11:20.899 Reservations: Not Supported 00:11:20.899 Reservation test passed 00:11:20.899 00:11:20.899 real 0m0.311s 00:11:20.899 user 0m0.085s 00:11:20.899 sys 0m0.155s 00:11:20.899 20:06:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.899 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:20.899 ************************************ 00:11:20.899 END TEST nvme_reserve 00:11:20.899 ************************************ 00:11:20.899 20:06:18 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection 00:11:20.899 20:06:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:20.899 20:06:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:20.899 20:06:18 -- common/autotest_common.sh@10 -- # set +x 00:11:20.899 ************************************ 00:11:20.899 START TEST nvme_err_injection 00:11:20.899 ************************************ 00:11:20.899 20:06:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection 00:11:21.468 NVMe Error Injection test 00:11:21.468 Attached to 0000:5e:00.0 00:11:21.468 0000:5e:00.0: get features failed as expected 00:11:21.468 0000:5e:00.0: get features successfully as expected 00:11:21.468 0000:5e:00.0: read failed as expected 00:11:21.468 0000:5e:00.0: read successfully as expected 00:11:21.468 Cleaning up... 
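Note: the paired "failed as expected" / "successfully as expected" lines above are the point of the error-injection test; they read as each injected error being hit exactly once and then cleared, after which the same command completes normally. A sketch of invoking the binary outside the wrapper, assuming the built tree at the logged path:

    # run the error-injection test directly against the attached controller
    sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection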
00:11:21.468 00:11:21.468 real 0m0.341s 00:11:21.468 user 0m0.090s 00:11:21.468 sys 0m0.190s 00:11:21.468 20:06:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:21.468 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 ************************************ 00:11:21.468 END TEST nvme_err_injection 00:11:21.468 ************************************ 00:11:21.468 20:06:19 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:21.468 20:06:19 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:11:21.468 20:06:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:21.468 20:06:19 -- common/autotest_common.sh@10 -- # set +x 00:11:21.468 ************************************ 00:11:21.468 START TEST nvme_overhead 00:11:21.468 ************************************ 00:11:21.468 20:06:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:11:22.849 Initializing NVMe Controllers 00:11:22.849 Attached to 0000:5e:00.0 00:11:22.849 Initialization complete. Launching workers. 00:11:22.849 submit (in ns) avg, min, max = 4717.5, 4396.5, 896602.6 00:11:22.849 complete (in ns) avg, min, max = 2769.4, 2704.3, 235173.9 00:11:22.849 00:11:22.849 Submit histogram 00:11:22.849 ================ 00:11:22.849 Range in us Cumulative Count 00:11:22.849 4.397 - 4.424: 0.1150% ( 101) 00:11:22.849 4.424 - 4.452: 0.7979% ( 600) 00:11:22.849 4.452 - 4.480: 3.1098% ( 2031) 00:11:22.849 4.480 - 4.508: 9.1688% ( 5323) 00:11:22.849 4.508 - 4.536: 18.1430% ( 7884) 00:11:22.849 4.536 - 4.563: 25.9277% ( 6839) 00:11:22.849 4.563 - 4.591: 32.9736% ( 6190) 00:11:22.849 4.591 - 4.619: 39.2820% ( 5542) 00:11:22.849 4.619 - 4.647: 46.1549% ( 6038) 00:11:22.849 4.647 - 4.675: 53.8451% ( 6756) 00:11:22.849 4.675 - 4.703: 61.1494% ( 6417) 00:11:22.849 4.703 - 4.730: 66.8989% ( 5051) 00:11:22.849 4.730 - 4.758: 72.4275% ( 4857) 00:11:22.849 4.758 - 4.786: 76.2498% ( 3358) 00:11:22.849 4.786 - 4.814: 79.2526% ( 2638) 00:11:22.849 4.814 - 4.842: 81.9651% ( 2383) 00:11:22.849 4.842 - 4.870: 84.8791% ( 2560) 00:11:22.849 4.870 - 4.897: 88.1551% ( 2878) 00:11:22.849 4.897 - 4.925: 91.2785% ( 2744) 00:11:22.849 4.925 - 4.953: 93.9307% ( 2330) 00:11:22.849 4.953 - 4.981: 95.9193% ( 1747) 00:11:22.849 4.981 - 5.009: 97.2180% ( 1141) 00:11:22.849 5.009 - 5.037: 98.1958% ( 859) 00:11:22.849 5.037 - 5.064: 98.8276% ( 555) 00:11:22.849 5.064 - 5.092: 99.1998% ( 327) 00:11:22.849 5.092 - 5.120: 99.3683% ( 148) 00:11:22.849 5.120 - 5.148: 99.4115% ( 38) 00:11:22.849 5.148 - 5.176: 99.4354% ( 21) 00:11:22.849 5.176 - 5.203: 99.4388% ( 3) 00:11:22.849 5.203 - 5.231: 99.4400% ( 1) 00:11:22.849 5.231 - 5.259: 99.4411% ( 1) 00:11:22.849 5.259 - 5.287: 99.4445% ( 3) 00:11:22.849 5.287 - 5.315: 99.4457% ( 1) 00:11:22.849 5.315 - 5.343: 99.4479% ( 2) 00:11:22.849 5.343 - 5.370: 99.4513% ( 3) 00:11:22.849 5.398 - 5.426: 99.4525% ( 1) 00:11:22.849 5.426 - 5.454: 99.4548% ( 2) 00:11:22.849 5.454 - 5.482: 99.4570% ( 2) 00:11:22.849 5.510 - 5.537: 99.4582% ( 1) 00:11:22.849 5.621 - 5.649: 99.4593% ( 1) 00:11:22.849 5.732 - 5.760: 99.4616% ( 2) 00:11:22.849 5.760 - 5.788: 99.4650% ( 3) 00:11:22.849 5.788 - 5.816: 99.4661% ( 1) 00:11:22.849 5.816 - 5.843: 99.4673% ( 1) 00:11:22.849 5.843 - 5.871: 99.4684% ( 1) 00:11:22.849 5.871 - 5.899: 99.4696% ( 1) 00:11:22.849 5.983 - 6.010: 99.4707% ( 1) 00:11:22.849 6.038 - 6.066: 99.4718% ( 1) 00:11:22.849 6.344 - 
6.372: 99.4730% ( 1) 00:11:22.849 6.595 - 6.623: 99.4741% ( 1) 00:11:22.849 6.706 - 6.734: 99.4764% ( 2) 00:11:22.849 7.179 - 7.235: 99.4775% ( 1) 00:11:22.849 7.346 - 7.402: 99.4798% ( 2) 00:11:22.849 7.402 - 7.457: 99.4809% ( 1) 00:11:22.849 7.513 - 7.569: 99.4832% ( 2) 00:11:22.849 7.569 - 7.624: 99.4855% ( 2) 00:11:22.849 7.624 - 7.680: 99.4935% ( 7) 00:11:22.849 7.680 - 7.736: 99.5003% ( 6) 00:11:22.849 7.736 - 7.791: 99.5037% ( 3) 00:11:22.849 7.791 - 7.847: 99.5060% ( 2) 00:11:22.849 7.847 - 7.903: 99.5185% ( 11) 00:11:22.849 7.903 - 7.958: 99.5265% ( 7) 00:11:22.849 7.958 - 8.014: 99.5413% ( 13) 00:11:22.849 8.014 - 8.070: 99.5504% ( 8) 00:11:22.849 8.070 - 8.125: 99.5720% ( 19) 00:11:22.849 8.125 - 8.181: 99.5788% ( 6) 00:11:22.849 8.181 - 8.237: 99.5879% ( 8) 00:11:22.849 8.237 - 8.292: 99.6107% ( 20) 00:11:22.849 8.292 - 8.348: 99.6221% ( 10) 00:11:22.849 8.348 - 8.403: 99.6335% ( 10) 00:11:22.849 8.403 - 8.459: 99.6483% ( 13) 00:11:22.849 8.459 - 8.515: 99.6597% ( 10) 00:11:22.849 8.515 - 8.570: 99.6745% ( 13) 00:11:22.849 8.570 - 8.626: 99.6904% ( 14) 00:11:22.849 8.626 - 8.682: 99.7006% ( 9) 00:11:22.849 8.682 - 8.737: 99.7143% ( 12) 00:11:22.849 8.737 - 8.793: 99.7200% ( 5) 00:11:22.849 8.793 - 8.849: 99.7371% ( 15) 00:11:22.849 8.849 - 8.904: 99.7541% ( 15) 00:11:22.849 8.904 - 8.960: 99.7621% ( 7) 00:11:22.849 8.960 - 9.016: 99.7689% ( 6) 00:11:22.849 9.016 - 9.071: 99.7849% ( 14) 00:11:22.849 9.071 - 9.127: 99.7951% ( 9) 00:11:22.849 9.127 - 9.183: 99.8065% ( 10) 00:11:22.849 9.183 - 9.238: 99.8202% ( 12) 00:11:22.849 9.238 - 9.294: 99.8258% ( 5) 00:11:22.850 9.294 - 9.350: 99.8372% ( 10) 00:11:22.850 9.350 - 9.405: 99.8429% ( 5) 00:11:22.850 9.405 - 9.461: 99.8532% ( 9) 00:11:22.850 9.461 - 9.517: 99.8589% ( 5) 00:11:22.850 9.517 - 9.572: 99.8645% ( 5) 00:11:22.850 9.572 - 9.628: 99.8714% ( 6) 00:11:22.850 9.628 - 9.683: 99.8839% ( 11) 00:11:22.850 9.683 - 9.739: 99.8964% ( 11) 00:11:22.850 9.739 - 9.795: 99.9010% ( 4) 00:11:22.850 9.795 - 9.850: 99.9089% ( 7) 00:11:22.850 9.850 - 9.906: 99.9124% ( 3) 00:11:22.850 9.906 - 9.962: 99.9203% ( 7) 00:11:22.850 9.962 - 10.017: 99.9237% ( 3) 00:11:22.850 10.017 - 10.073: 99.9283% ( 4) 00:11:22.850 10.073 - 10.129: 99.9340% ( 5) 00:11:22.850 10.129 - 10.184: 99.9408% ( 6) 00:11:22.850 10.184 - 10.240: 99.9419% ( 1) 00:11:22.850 10.240 - 10.296: 99.9442% ( 2) 00:11:22.850 10.296 - 10.351: 99.9465% ( 2) 00:11:22.850 10.351 - 10.407: 99.9511% ( 4) 00:11:22.850 10.407 - 10.463: 99.9533% ( 2) 00:11:22.850 10.463 - 10.518: 99.9567% ( 3) 00:11:22.850 10.518 - 10.574: 99.9579% ( 1) 00:11:22.850 10.574 - 10.630: 99.9590% ( 1) 00:11:22.850 10.685 - 10.741: 99.9602% ( 1) 00:11:22.850 10.852 - 10.908: 99.9624% ( 2) 00:11:22.850 10.908 - 10.963: 99.9636% ( 1) 00:11:22.850 11.075 - 11.130: 99.9659% ( 2) 00:11:22.850 11.130 - 11.186: 99.9670% ( 1) 00:11:22.850 11.186 - 11.242: 99.9681% ( 1) 00:11:22.850 11.297 - 11.353: 99.9704% ( 2) 00:11:22.850 11.353 - 11.409: 99.9727% ( 2) 00:11:22.850 11.464 - 11.520: 99.9738% ( 1) 00:11:22.850 11.520 - 11.576: 99.9750% ( 1) 00:11:22.850 11.576 - 11.631: 99.9761% ( 1) 00:11:22.850 11.687 - 11.743: 99.9772% ( 1) 00:11:22.850 11.965 - 12.021: 99.9784% ( 1) 00:11:22.850 12.466 - 12.522: 99.9806% ( 2) 00:11:22.850 12.800 - 12.856: 99.9829% ( 2) 00:11:22.850 13.023 - 13.078: 99.9841% ( 1) 00:11:22.850 13.190 - 13.245: 99.9852% ( 1) 00:11:22.850 13.357 - 13.412: 99.9863% ( 1) 00:11:22.850 15.137 - 15.249: 99.9875% ( 1) 00:11:22.850 17.030 - 17.141: 99.9886% ( 1) 00:11:22.850 17.363 - 17.475: 99.9898% ( 1) 
00:11:22.850 17.475 - 17.586: 99.9909% ( 1) 00:11:22.850 17.809 - 17.920: 99.9920% ( 1) 00:11:22.850 20.925 - 21.037: 99.9932% ( 1) 00:11:22.850 22.706 - 22.817: 99.9943% ( 1) 00:11:22.850 32.723 - 32.946: 99.9954% ( 1) 00:11:22.850 40.292 - 40.515: 99.9966% ( 1) 00:11:22.850 68.118 - 68.563: 99.9977% ( 1) 00:11:22.850 70.790 - 71.235: 99.9989% ( 1) 00:11:22.850 893.997 - 897.558: 100.0000% ( 1) 00:11:22.850 00:11:22.850 Complete histogram 00:11:22.850 ================== 00:11:22.850 Range in us Cumulative Count 00:11:22.850 2.699 - 2.713: 0.0467% ( 41) 00:11:22.850 2.713 - 2.727: 4.1331% ( 3590) 00:11:22.850 2.727 - 2.741: 37.7168% ( 29504) 00:11:22.850 2.741 - 2.755: 82.6697% ( 39492) 00:11:22.850 2.755 - 2.769: 95.8123% ( 11546) 00:11:22.850 2.769 - 2.783: 98.0615% ( 1976) 00:11:22.850 2.783 - 2.797: 99.0177% ( 840) 00:11:22.850 2.797 - 2.810: 99.2442% ( 199) 00:11:22.850 2.810 - 2.824: 99.2909% ( 41) 00:11:22.850 2.824 - 2.838: 99.3364% ( 40) 00:11:22.850 2.838 - 2.852: 99.3944% ( 51) 00:11:22.850 2.852 - 2.866: 99.4206% ( 23) 00:11:22.850 2.866 - 2.880: 99.4400% ( 17) 00:11:22.850 2.880 - 2.894: 99.4502% ( 9) 00:11:22.850 2.894 - 2.908: 99.4548% ( 4) 00:11:22.850 2.908 - 2.922: 99.4559% ( 1) 00:11:22.850 2.922 - 2.936: 99.4570% ( 1) 00:11:22.850 2.950 - 2.963: 99.4627% ( 5) 00:11:22.850 2.963 - 2.977: 99.4639% ( 1) 00:11:22.850 2.977 - 2.991: 99.4661% ( 2) 00:11:22.850 2.991 - 3.005: 99.4673% ( 1) 00:11:22.850 3.005 - 3.019: 99.4684% ( 1) 00:11:22.850 3.047 - 3.061: 99.4696% ( 1) 00:11:22.850 3.117 - 3.130: 99.4707% ( 1) 00:11:22.850 3.130 - 3.144: 99.4718% ( 1) 00:11:22.850 3.158 - 3.172: 99.4730% ( 1) 00:11:22.850 3.186 - 3.200: 99.4741% ( 1) 00:11:22.850 3.214 - 3.228: 99.4753% ( 1) 00:11:22.850 3.228 - 3.242: 99.4775% ( 2) 00:11:22.850 3.242 - 3.256: 99.4787% ( 1) 00:11:22.850 3.339 - 3.353: 99.4798% ( 1) 00:11:22.850 3.437 - 3.450: 99.4809% ( 1) 00:11:22.850 3.450 - 3.464: 99.4821% ( 1) 00:11:22.850 3.506 - 3.520: 99.4832% ( 1) 00:11:22.850 3.701 - 3.729: 99.4844% ( 1) 00:11:22.850 3.757 - 3.784: 99.4855% ( 1) 00:11:22.850 3.840 - 3.868: 99.4866% ( 1) 00:11:22.850 3.923 - 3.951: 99.4878% ( 1) 00:11:22.850 3.951 - 3.979: 99.4889% ( 1) 00:11:22.850 3.979 - 4.007: 99.4901% ( 1) 00:11:22.850 4.063 - 4.090: 99.4912% ( 1) 00:11:22.850 4.090 - 4.118: 99.4935% ( 2) 00:11:22.850 4.285 - 4.313: 99.4946% ( 1) 00:11:22.850 4.508 - 4.536: 99.4957% ( 1) 00:11:22.850 4.758 - 4.786: 99.4980% ( 2) 00:11:22.850 5.203 - 5.231: 99.4992% ( 1) 00:11:22.850 5.482 - 5.510: 99.5003% ( 1) 00:11:22.850 5.565 - 5.593: 99.5014% ( 1) 00:11:22.850 5.593 - 5.621: 99.5037% ( 2) 00:11:22.850 5.621 - 5.649: 99.5048% ( 1) 00:11:22.850 5.649 - 5.677: 99.5060% ( 1) 00:11:22.850 5.677 - 5.704: 99.5083% ( 2) 00:11:22.850 5.704 - 5.732: 99.5105% ( 2) 00:11:22.850 5.732 - 5.760: 99.5140% ( 3) 00:11:22.850 5.760 - 5.788: 99.5185% ( 4) 00:11:22.850 5.788 - 5.816: 99.5208% ( 2) 00:11:22.850 5.871 - 5.899: 99.5265% ( 5) 00:11:22.850 5.899 - 5.927: 99.5310% ( 4) 00:11:22.850 5.927 - 5.955: 99.5322% ( 1) 00:11:22.850 5.955 - 5.983: 99.5344% ( 2) 00:11:22.850 5.983 - 6.010: 99.5401% ( 5) 00:11:22.850 6.010 - 6.038: 99.5470% ( 6) 00:11:22.850 6.038 - 6.066: 99.5515% ( 4) 00:11:22.850 6.066 - 6.094: 99.5606% ( 8) 00:11:22.850 6.094 - 6.122: 99.5675% ( 6) 00:11:22.850 6.122 - 6.150: 99.5777% ( 9) 00:11:22.850 6.150 - 6.177: 99.5868% ( 8) 00:11:22.850 6.177 - 6.205: 99.5959% ( 8) 00:11:22.850 6.205 - 6.233: 99.5993% ( 3) 00:11:22.850 6.233 - 6.261: 99.6073% ( 7) 00:11:22.850 6.261 - 6.289: 99.6175% ( 9) 00:11:22.850 6.289 - 6.317: 
99.6255% ( 7) 00:11:22.850 6.317 - 6.344: 99.6392% ( 12) 00:11:22.850 6.344 - 6.372: 99.6483% ( 8) 00:11:22.850 6.372 - 6.400: 99.6574% ( 8) 00:11:22.850 6.400 - 6.428: 99.6665% ( 8) 00:11:22.850 6.428 - 6.456: 99.6756% ( 8) 00:11:22.850 6.456 - 6.483: 99.6938% ( 16) 00:11:22.850 6.483 - 6.511: 99.6995% ( 5) 00:11:22.850 6.511 - 6.539: 99.7120% ( 11) 00:11:22.850 6.539 - 6.567: 99.7154% ( 3) 00:11:22.850 6.567 - 6.595: 99.7245% ( 8) 00:11:22.850 6.595 - 6.623: 99.7280% ( 3) 00:11:22.850 6.623 - 6.650: 99.7382% ( 9) 00:11:22.850 6.650 - 6.678: 99.7450% ( 6) 00:11:22.850 6.678 - 6.706: 99.7564% ( 10) 00:11:22.850 6.706 - 6.734: 99.7632% ( 6) 00:11:22.850 6.734 - 6.762: 99.7678% ( 4) 00:11:22.850 6.762 - 6.790: 99.7723% ( 4) 00:11:22.850 6.790 - 6.817: 99.7769% ( 4) 00:11:22.850 6.817 - 6.845: 99.7894% ( 11) 00:11:22.850 6.845 - 6.873: 99.8008% ( 10) 00:11:22.850 6.873 - 6.901: 99.8099% ( 8) 00:11:22.850 6.901 - 6.929: 99.8133% ( 3) 00:11:22.850 6.929 - 6.957: 99.8179% ( 4) 00:11:22.850 6.957 - 6.984: 99.8213% ( 3) 00:11:22.850 6.984 - 7.012: 99.8258% ( 4) 00:11:22.850 7.012 - 7.040: 99.8293% ( 3) 00:11:22.850 7.040 - 7.068: 99.8349% ( 5) 00:11:22.850 7.068 - 7.096: 99.8406% ( 5) 00:11:22.850 7.096 - 7.123: 99.8441% ( 3) 00:11:22.850 7.123 - 7.179: 99.8532% ( 8) 00:11:22.850 7.179 - 7.235: 99.8600% ( 6) 00:11:22.850 7.235 - 7.290: 99.8668% ( 6) 00:11:22.850 7.290 - 7.346: 99.8748% ( 7) 00:11:22.850 7.346 - 7.402: 99.8816% ( 6) 00:11:22.850 7.402 - 7.457: 99.8884% ( 6) 00:11:22.850 7.457 - 7.513: 99.8976% ( 8) 00:11:22.850 7.513 - 7.569: 99.9044% ( 6) 00:11:22.850 7.569 - 7.624: 99.9112% ( 6) 00:11:22.850 7.624 - 7.680: 99.9169% ( 5) 00:11:22.850 7.680 - 7.736: 99.9192% ( 2) 00:11:22.850 7.736 - 7.791: 99.9215% ( 2) 00:11:22.850 7.791 - 7.847: 99.9283% ( 6) 00:11:22.850 7.847 - 7.903: 99.9340% ( 5) 00:11:22.850 7.958 - 8.014: 99.9385% ( 4) 00:11:22.850 8.014 - 8.070: 99.9431% ( 4) 00:11:22.850 8.070 - 8.125: 99.9465% ( 3) 00:11:22.850 8.125 - 8.181: 99.9499% ( 3) 00:11:22.850 8.181 - 8.237: 99.9522% ( 2) 00:11:22.850 8.237 - 8.292: 99.9567% ( 4) 00:11:22.850 8.292 - 8.348: 99.9602% ( 3) 00:11:22.850 8.403 - 8.459: 99.9647% ( 4) 00:11:22.850 8.459 - 8.515: 99.9670% ( 2) 00:11:22.850 8.515 - 8.570: 99.9693% ( 2) 00:11:22.850 8.570 - 8.626: 99.9715% ( 2) 00:11:22.850 8.737 - 8.793: 99.9727% ( 1) 00:11:22.850 8.904 - 8.960: 99.9738% ( 1) 00:11:22.850 8.960 - 9.016: 99.9761% ( 2) 00:11:22.851 9.071 - 9.127: 99.9784% ( 2) 00:11:22.851 9.238 - 9.294: 99.9795% ( 1) 00:11:22.851 9.294 - 9.350: 99.9806% ( 1) 00:11:22.851 9.628 - 9.683: 99.9818% ( 1) 00:11:22.851 9.795 - 9.850: 99.9829% ( 1) 00:11:22.851 9.962 - 10.017: 99.9841% ( 1) 00:11:22.851 10.741 - 10.797: 99.9852% ( 1) 00:11:22.851 11.297 - 11.353: 99.9863% ( 1) 00:11:22.851 11.464 - 11.520: 99.9875% ( 1) 00:11:22.851 11.965 - 12.021: 99.9898% ( 2) 00:11:22.851 12.077 - 12.132: 99.9909% ( 1) 00:11:22.851 13.468 - 13.523: 99.9920% ( 1) 00:11:22.851 13.746 - 13.802: 99.9932% ( 1) 00:11:22.851 14.247 - 14.358: 99.9943% ( 1) 00:11:22.851 14.470 - 14.581: 99.9954% ( 1) 00:11:22.851 20.925 - 21.037: 99.9966% ( 1) 00:11:22.851 25.377 - 25.489: 99.9977% ( 1) 00:11:22.851 25.823 - 25.934: 99.9989% ( 1) 00:11:22.851 235.075 - 236.856: 100.0000% ( 1) 00:11:22.851 00:11:22.851 00:11:22.851 real 0m1.347s 00:11:22.851 user 0m1.110s 00:11:22.851 sys 0m0.169s 00:11:22.851 20:06:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.851 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:22.851 ************************************ 00:11:22.851 
END TEST nvme_overhead 00:11:22.851 ************************************ 00:11:22.851 20:06:20 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0 00:11:22.851 20:06:20 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:11:22.851 20:06:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:22.851 20:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:22.851 ************************************ 00:11:22.851 START TEST nvme_arbitration 00:11:22.851 ************************************ 00:11:22.851 20:06:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0 00:11:26.156 Initializing NVMe Controllers 00:11:26.156 Attached to 0000:5e:00.0 00:11:26.156 Associating INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) with lcore 0 00:11:26.156 Associating INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) with lcore 1 00:11:26.156 Associating INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) with lcore 2 00:11:26.156 Associating INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) with lcore 3 00:11:26.156 /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:11:26.156 /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:26.156 Initialization complete. Launching workers. 00:11:26.156 Starting thread on core 1 with urgent priority queue 00:11:26.156 Starting thread on core 2 with urgent priority queue 00:11:26.156 Starting thread on core 3 with urgent priority queue 00:11:26.156 Starting thread on core 0 with urgent priority queue 00:11:26.156 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) core 0: 10391.33 IO/s 9.62 secs/100000 ios 00:11:26.156 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) core 1: 10498.00 IO/s 9.53 secs/100000 ios 00:11:26.156 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) core 2: 8958.00 IO/s 11.16 secs/100000 ios 00:11:26.156 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) core 3: 8348.67 IO/s 11.98 secs/100000 ios 00:11:26.156 ======================================================== 00:11:26.156 00:11:26.156 00:11:26.156 real 0m3.366s 00:11:26.156 user 0m9.167s 00:11:26.156 sys 0m0.181s 00:11:26.156 20:06:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.156 20:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:26.156 ************************************ 00:11:26.156 END TEST nvme_arbitration 00:11:26.156 ************************************ 00:11:26.156 20:06:24 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:26.156 20:06:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:11:26.156 20:06:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.156 20:06:24 -- common/autotest_common.sh@10 -- # set +x 00:11:26.156 ************************************ 00:11:26.156 START TEST nvme_single_aen 00:11:26.156 ************************************ 00:11:26.156 20:06:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 -L log 00:11:26.156 [2024-04-25 20:06:24.064503] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:11:26.156 [2024-04-25 20:06:24.064540] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.415 [2024-04-25 20:06:24.306593] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:11:26.415 [2024-04-25 20:06:24.306638] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2090707) is not found. Dropping the request. 00:11:26.415 [2024-04-25 20:06:24.306666] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2090707) is not found. Dropping the request. 00:11:26.415 [2024-04-25 20:06:24.306687] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2090707) is not found. Dropping the request. 00:11:26.415 [2024-04-25 20:06:24.306703] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2090707) is not found. Dropping the request. 00:11:31.688 Asynchronous Event Request test 00:11:31.688 Attached to 0000:5e:00.0 00:11:31.688 Reset controller to setup AER completions for this process 00:11:31.688 Registering asynchronous event callbacks... 00:11:31.688 Getting orig temperature thresholds of all controllers 00:11:31.688 0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:31.688 Setting all controllers temperature threshold low to trigger AER 00:11:31.688 Waiting for all controllers temperature threshold to be set lower 00:11:31.688 Waiting for all controllers to trigger AER and reset threshold 00:11:31.688 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:31.688 aer_cb - Resetting Temp Threshold for device: 0000:5e:00.0 00:11:31.688 0000:5e:00.0: Current Temperature: 310 Kelvin (37 Celsius) 00:11:31.688 Cleaning up... 
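Note: the single-AER sequence above reads the controller's original temperature threshold (343 Kelvin), lowers it so the device reports an over-temperature condition, waits for the resulting asynchronous event on log page 2, and restores the threshold from aer_cb. A sketch of the underlying invocation, with the flags copied from the run_test line earlier in this log ('-L log' presumably enables the extra per-call logging interleaved above):

    # AER/temperature-threshold test on shm id 0, same flags as the captured run
    sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 -L log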
00:11:31.688 00:11:31.689 real 0m5.134s 00:11:31.689 user 0m4.185s 00:11:31.689 sys 0m0.881s 00:11:31.689 20:06:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.689 20:06:29 -- common/autotest_common.sh@10 -- # set +x 00:11:31.689 ************************************ 00:11:31.689 END TEST nvme_single_aen 00:11:31.689 ************************************ 00:11:31.689 20:06:29 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:31.689 20:06:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:31.689 20:06:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.689 20:06:29 -- common/autotest_common.sh@10 -- # set +x 00:11:31.689 ************************************ 00:11:31.689 START TEST nvme_doorbell_aers 00:11:31.689 ************************************ 00:11:31.689 20:06:29 -- common/autotest_common.sh@1104 -- # nvme_doorbell_aers 00:11:31.689 20:06:29 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:31.689 20:06:29 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:31.689 20:06:29 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:31.689 20:06:29 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:31.689 20:06:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:31.689 20:06:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:11:31.689 20:06:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:31.689 20:06:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:11:31.689 20:06:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:31.689 20:06:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:11:31.689 20:06:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:11:31.689 20:06:29 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:31.689 20:06:29 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:5e:00.0' 00:11:31.947 [2024-04-25 20:06:29.692857] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2094838) is not found. Dropping the request. 00:11:42.017 Executing: test_write_invalid_db 00:11:42.017 Waiting for AER completion... 00:11:42.017 Failure: test_write_invalid_db 00:11:42.017 00:11:42.017 Executing: test_invalid_db_write_overflow_sq 00:11:42.017 Waiting for AER completion... 00:11:42.017 Failure: test_invalid_db_write_overflow_sq 00:11:42.017 00:11:42.017 Executing: test_invalid_db_write_overflow_cq 00:11:42.017 Waiting for AER completion... 
00:11:42.017 Failure: test_invalid_db_write_overflow_cq 00:11:42.017 00:11:42.017 00:11:42.017 real 0m10.121s 00:11:42.017 user 0m6.827s 00:11:42.017 sys 0m3.191s 00:11:42.017 20:06:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.017 20:06:39 -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 ************************************ 00:11:42.017 END TEST nvme_doorbell_aers 00:11:42.017 ************************************ 00:11:42.017 20:06:39 -- nvme/nvme.sh@97 -- # uname 00:11:42.017 20:06:39 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:42.017 20:06:39 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:42.017 20:06:39 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:11:42.017 20:06:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:42.017 20:06:39 -- common/autotest_common.sh@10 -- # set +x 00:11:42.017 ************************************ 00:11:42.017 START TEST nvme_multi_aen 00:11:42.017 ************************************ 00:11:42.017 20:06:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:11:42.017 [2024-04-25 20:06:39.428245] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:42.017 [2024-04-25 20:06:39.428291] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.017 [2024-04-25 20:06:39.707551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:11:42.017 [2024-04-25 20:06:39.707598] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2094838) is not found. Dropping the request. 00:11:42.017 [2024-04-25 20:06:39.707625] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2094838) is not found. Dropping the request. 00:11:42.017 [2024-04-25 20:06:39.707648] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2094838) is not found. Dropping the request. 00:11:42.017 [2024-04-25 20:06:39.712011] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:11:42.017 [2024-04-25 20:06:39.712117] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.017 Child process pid: 2096877 00:11:47.288 [Child] Asynchronous Event Request test 00:11:47.288 [Child] Attached to 0000:5e:00.0 00:11:47.288 [Child] Registering asynchronous event callbacks... 00:11:47.288 [Child] Getting orig temperature thresholds of all controllers 00:11:47.288 [Child] 0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:47.288 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:47.288 [Child] 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:47.288 [Child] 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:47.288 [Child] 0000:5e:00.0: Current Temperature: 310 Kelvin (37 Celsius) 00:11:47.288 [Child] Cleaning up... 
00:11:47.288 [Child] 0000:5e:00.0: Current Temperature: 310 Kelvin (37 Celsius) 00:11:47.288 Asynchronous Event Request test 00:11:47.288 Attached to 0000:5e:00.0 00:11:47.288 Reset controller to setup AER completions for this process 00:11:47.288 Registering asynchronous event callbacks... 00:11:47.288 Getting orig temperature thresholds of all controllers 00:11:47.288 0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:47.288 Setting all controllers temperature threshold low to trigger AER 00:11:47.288 Waiting for all controllers temperature threshold to be set lower 00:11:47.288 Waiting for all controllers to trigger AER and reset threshold 00:11:47.288 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:47.288 aer_cb - Resetting Temp Threshold for device: 0000:5e:00.0 00:11:47.288 0000:5e:00.0: Current Temperature: 310 Kelvin (37 Celsius) 00:11:47.288 Cleaning up... 00:11:47.288 00:11:47.288 real 0m4.803s 00:11:47.288 user 0m3.647s 00:11:47.288 sys 0m2.142s 00:11:47.288 20:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.288 20:06:44 -- common/autotest_common.sh@10 -- # set +x 00:11:47.288 ************************************ 00:11:47.288 END TEST nvme_multi_aen 00:11:47.288 ************************************ 00:11:47.288 20:06:44 -- nvme/nvme.sh@99 -- # run_test nvme_startup /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000 00:11:47.288 20:06:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:47.288 20:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.288 20:06:44 -- common/autotest_common.sh@10 -- # set +x 00:11:47.288 ************************************ 00:11:47.288 START TEST nvme_startup 00:11:47.288 ************************************ 00:11:47.289 20:06:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000 00:11:47.289 Initializing NVMe Controllers 00:11:47.289 Attached to 0000:5e:00.0 00:11:47.289 Initialization complete. 00:11:47.289 Time used:236165.031 (us). 
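Note: the startup test above reports 'Time used:236165.031 (us)', i.e. roughly 236 ms from attach to ready on 0000:5e:00.0. A sketch of the direct invocation; the '-t 1000000' argument is carried over unchanged from the run_test line, and its units are not stated anywhere in this log:

    # controller startup-time check, same argument as the captured run
    sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000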
00:11:47.289 00:11:47.289 real 0m0.285s 00:11:47.289 user 0m0.088s 00:11:47.289 sys 0m0.158s 00:11:47.289 20:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.289 20:06:44 -- common/autotest_common.sh@10 -- # set +x 00:11:47.289 ************************************ 00:11:47.289 END TEST nvme_startup 00:11:47.289 ************************************ 00:11:47.289 20:06:44 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:47.289 20:06:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:11:47.289 20:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:47.289 20:06:44 -- common/autotest_common.sh@10 -- # set +x 00:11:47.289 ************************************ 00:11:47.289 START TEST nvme_multi_secondary 00:11:47.289 ************************************ 00:11:47.289 20:06:44 -- common/autotest_common.sh@1104 -- # nvme_multi_secondary 00:11:47.289 20:06:44 -- nvme/nvme.sh@52 -- # pid0=2097628 00:11:47.289 20:06:44 -- nvme/nvme.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:47.289 20:06:44 -- nvme/nvme.sh@54 -- # pid1=2097629 00:11:47.289 20:06:44 -- nvme/nvme.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:47.289 20:06:44 -- nvme/nvme.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:50.579 Initializing NVMe Controllers 00:11:50.579 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:50.579 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 1 00:11:50.579 Initialization complete. Launching workers. 00:11:50.579 ======================================================== 00:11:50.579 Latency(us) 00:11:50.579 Device Information : IOPS MiB/s Average min max 00:11:50.579 PCIE (0000:5e:00.0) NSID 1 from core 1: 78978.68 308.51 202.27 50.93 3588.43 00:11:50.579 ======================================================== 00:11:50.579 Total : 78978.68 308.51 202.27 50.93 3588.43 00:11:50.579 00:11:50.579 Initializing NVMe Controllers 00:11:50.579 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:50.579 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 2 00:11:50.579 Initialization complete. Launching workers. 00:11:50.579 ======================================================== 00:11:50.579 Latency(us) 00:11:50.579 Device Information : IOPS MiB/s Average min max 00:11:50.579 PCIE (0000:5e:00.0) NSID 1 from core 2: 39496.33 154.28 405.18 23.81 6450.62 00:11:50.579 ======================================================== 00:11:50.579 Total : 39496.33 154.28 405.18 23.81 6450.62 00:11:50.579 00:11:50.579 20:06:48 -- nvme/nvme.sh@56 -- # wait 2097628 00:11:52.484 Initializing NVMe Controllers 00:11:52.484 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:52.484 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:11:52.484 Initialization complete. Launching workers. 
00:11:52.484 ======================================================== 00:11:52.484 Latency(us) 00:11:52.484 Device Information : IOPS MiB/s Average min max 00:11:52.484 PCIE (0000:5e:00.0) NSID 1 from core 0: 80805.60 315.65 197.69 25.65 3526.15 00:11:52.484 ======================================================== 00:11:52.484 Total : 80805.60 315.65 197.69 25.65 3526.15 00:11:52.484 00:11:52.484 20:06:50 -- nvme/nvme.sh@57 -- # wait 2097629 00:11:52.484 20:06:50 -- nvme/nvme.sh@61 -- # pid0=2098359 00:11:52.484 20:06:50 -- nvme/nvme.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:52.484 20:06:50 -- nvme/nvme.sh@63 -- # pid1=2098360 00:11:52.484 20:06:50 -- nvme/nvme.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:52.484 20:06:50 -- nvme/nvme.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:55.773 Initializing NVMe Controllers 00:11:55.773 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:55.773 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:11:55.773 Initialization complete. Launching workers. 00:11:55.773 ======================================================== 00:11:55.773 Latency(us) 00:11:55.773 Device Information : IOPS MiB/s Average min max 00:11:55.773 PCIE (0000:5e:00.0) NSID 1 from core 0: 78656.87 307.25 203.10 25.40 3545.98 00:11:55.773 ======================================================== 00:11:55.773 Total : 78656.87 307.25 203.10 25.40 3545.98 00:11:55.773 00:11:55.773 Initializing NVMe Controllers 00:11:55.773 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:55.773 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 1 00:11:55.773 Initialization complete. Launching workers. 00:11:55.773 ======================================================== 00:11:55.773 Latency(us) 00:11:55.773 Device Information : IOPS MiB/s Average min max 00:11:55.773 PCIE (0000:5e:00.0) NSID 1 from core 1: 78766.75 307.68 202.81 25.61 3549.56 00:11:55.773 ======================================================== 00:11:55.773 Total : 78766.75 307.68 202.81 25.61 3549.56 00:11:55.773 00:11:57.678 Initializing NVMe Controllers 00:11:57.678 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:11:57.678 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 2 00:11:57.678 Initialization complete. Launching workers. 
00:11:57.678 ======================================================== 00:11:57.678 Latency(us) 00:11:57.678 Device Information : IOPS MiB/s Average min max 00:11:57.678 PCIE (0000:5e:00.0) NSID 1 from core 2: 41992.65 164.03 380.67 23.19 7341.47 00:11:57.678 ======================================================== 00:11:57.678 Total : 41992.65 164.03 380.67 23.19 7341.47 00:11:57.678 00:11:57.678 20:06:55 -- nvme/nvme.sh@65 -- # wait 2098359 00:11:57.678 20:06:55 -- nvme/nvme.sh@66 -- # wait 2098360 00:11:57.678 00:11:57.678 real 0m10.880s 00:11:57.678 user 0m18.498s 00:11:57.678 sys 0m1.130s 00:11:57.678 20:06:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.678 20:06:55 -- common/autotest_common.sh@10 -- # set +x 00:11:57.678 ************************************ 00:11:57.678 END TEST nvme_multi_secondary 00:11:57.678 ************************************ 00:11:57.678 20:06:55 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:57.678 20:06:55 -- nvme/nvme.sh@102 -- # kill_stub 00:11:57.678 20:06:55 -- common/autotest_common.sh@1065 -- # [[ -e /proc/2090038 ]] 00:11:57.678 20:06:55 -- common/autotest_common.sh@1066 -- # kill 2090038 00:11:57.678 20:06:55 -- common/autotest_common.sh@1067 -- # wait 2090038 00:11:58.246 [2024-04-25 20:06:56.029314] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2096876) is not found. Dropping the request. 00:11:58.246 [2024-04-25 20:06:56.029412] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2096876) is not found. Dropping the request. 00:11:58.246 [2024-04-25 20:06:56.029452] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2096876) is not found. Dropping the request. 00:11:58.246 [2024-04-25 20:06:56.029489] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2096876) is not found. Dropping the request. 00:12:02.443 20:06:59 -- common/autotest_common.sh@1069 -- # rm -f /var/run/spdk_stub0 00:12:02.443 20:06:59 -- common/autotest_common.sh@1073 -- # echo 2 00:12:02.443 20:06:59 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:02.443 20:06:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:02.443 20:06:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:02.443 20:06:59 -- common/autotest_common.sh@10 -- # set +x 00:12:02.443 ************************************ 00:12:02.443 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:02.443 ************************************ 00:12:02.443 20:06:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:02.443 * Looking for test storage... 
00:12:02.443 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:12:02.443 20:06:59 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:02.443 20:07:00 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:02.443 20:07:00 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:02.443 20:07:00 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:02.443 20:07:00 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:02.443 20:07:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:02.443 20:07:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:02.443 20:07:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:02.443 20:07:00 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:12:02.443 20:07:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:02.443 20:07:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:12:02.443 20:07:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:12:02.443 20:07:00 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:5e:00.0 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:5e:00.0 ']' 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=2099782 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0xF 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:02.443 20:07:00 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 2099782 00:12:02.443 20:07:00 -- common/autotest_common.sh@819 -- # '[' -z 2099782 ']' 00:12:02.443 20:07:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.443 20:07:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:02.443 20:07:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.443 20:07:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:02.443 20:07:00 -- common/autotest_common.sh@10 -- # set +x 00:12:02.443 [2024-04-25 20:07:00.157457] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
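The get_first_nvme_bdf trace above reduces to asking gen_nvme.sh for the generated bdev config and taking the first PCIe address it reports; roughly, with rootdir as used throughout this job:

rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
# Collect every NVMe transport address known to the generated config...
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
# ...and take the first one (0000:5e:00.0 on this node) as the test target.
bdf=${bdfs[0]}
printf '%s\n' "$bdf"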
00:12:02.444 [2024-04-25 20:07:00.157528] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099782 ] 00:12:02.444 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.444 [2024-04-25 20:07:00.267913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.444 [2024-04-25 20:07:00.371004] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:02.444 [2024-04-25 20:07:00.371203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.444 [2024-04-25 20:07:00.371225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.444 [2024-04-25 20:07:00.371327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.444 [2024-04-25 20:07:00.371328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.703 [2024-04-25 20:07:00.565783] 'OCF_Core' volume operations registered 00:12:02.703 [2024-04-25 20:07:00.569257] 'OCF_Cache' volume operations registered 00:12:02.703 [2024-04-25 20:07:00.573201] 'OCF Composite' volume operations registered 00:12:02.703 [2024-04-25 20:07:00.576715] 'SPDK_block_device' volume operations registered 00:12:03.271 20:07:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:03.271 20:07:01 -- common/autotest_common.sh@852 -- # return 0 00:12:03.271 20:07:01 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0 00:12:03.271 20:07:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:03.271 20:07:01 -- common/autotest_common.sh@10 -- # set +x 00:12:06.561 nvme0n1 00:12:06.561 20:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_GJczA.txt 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:06.561 20:07:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.561 20:07:03 -- common/autotest_common.sh@10 -- # set +x 00:12:06.561 true 00:12:06.561 20:07:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1714068423 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=2100243 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:06.561 20:07:03 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:08.474 20:07:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:08.474 20:07:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.474 20:07:05 -- common/autotest_common.sh@10 -- # set +x 00:12:08.474 [2024-04-25 20:07:05.972014] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[0000:5e:00.0] resetting controller 00:12:08.474 [2024-04-25 20:07:05.972236] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:08.474 [2024-04-25 20:07:05.972258] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:08.474 [2024-04-25 20:07:05.972275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:08.474 [2024-04-25 20:07:05.973408] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:08.474 20:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:08.474 20:07:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 2100243 00:12:08.474 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 2100243 00:12:08.474 20:07:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 2100243 00:12:08.474 20:07:05 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:08.474 20:07:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:12:08.474 20:07:06 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:08.474 20:07:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:08.474 20:07:06 -- common/autotest_common.sh@10 -- # set +x 00:12:11.769 20:07:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:11.769 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:11.769 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_GJczA.txt 00:12:12.077 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:12.078 
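The base64_decode_bits calls above unpack the completion that was captured in /tmp/err_inj_GJczA.txt: the cpl field is base64, so it is first expanded into a byte array, and the status-code (sc) and status-code-type (sct) bit fields are then sliced out of it. Only the decode stage visible in the trace is sketched here; the bit-slicing arithmetic itself is omitted:

# Completion blob recorded by the test (value taken from the trace above).
cpl='AAAAAAAAAAAAAAAAAAACAA=='
# Expand it into one hex byte per array element, as the helper does above.
bin_array=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
printf '%s\n' "${bin_array[@]}"
# On this run the status bytes decode to sc=0x1 and sct=0x0, matching the
# injected "INVALID OPCODE" completion reported during the reset earlier.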
20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_GJczA.txt 00:12:12.078 20:07:09 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 2099782 00:12:12.078 20:07:09 -- common/autotest_common.sh@926 -- # '[' -z 2099782 ']' 00:12:12.078 20:07:09 -- common/autotest_common.sh@930 -- # kill -0 2099782 00:12:12.078 20:07:09 -- common/autotest_common.sh@931 -- # uname 00:12:12.078 20:07:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:12.078 20:07:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2099782 00:12:12.078 20:07:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:12.078 20:07:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:12.078 20:07:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2099782' 00:12:12.078 killing process with pid 2099782 00:12:12.078 20:07:09 -- common/autotest_common.sh@945 -- # kill 2099782 00:12:12.078 20:07:09 -- common/autotest_common.sh@950 -- # wait 2099782 00:12:12.665 20:07:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:12.665 20:07:10 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:12.665 00:12:12.665 real 0m10.439s 00:12:12.665 user 0m39.232s 00:12:12.665 sys 0m0.857s 00:12:12.665 20:07:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.665 20:07:10 -- common/autotest_common.sh@10 -- # set +x 00:12:12.665 ************************************ 00:12:12.665 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:12.665 ************************************ 00:12:12.665 20:07:10 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:12.665 20:07:10 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:12.665 20:07:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:12.666 20:07:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:12.666 20:07:10 -- common/autotest_common.sh@10 -- # set +x 00:12:12.666 ************************************ 00:12:12.666 START TEST nvme_fio 00:12:12.666 ************************************ 00:12:12.666 20:07:10 -- common/autotest_common.sh@1104 -- # nvme_fio_test 00:12:12.666 20:07:10 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme 00:12:12.666 20:07:10 -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:12.666 20:07:10 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:12.666 20:07:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:12.666 20:07:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:12.666 20:07:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:12.666 20:07:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:12:12.666 20:07:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:12.666 20:07:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:12:12.666 20:07:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:12:12.666 20:07:10 -- nvme/nvme.sh@33 -- # bdfs=('0000:5e:00.0') 00:12:12.666 20:07:10 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:12.666 20:07:10 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:12.666 20:07:10 -- nvme/nvme.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' 00:12:12.666 20:07:10 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 
00:12:12.666 EAL: No free 2048 kB hugepages reported on node 1 00:12:19.234 20:07:17 -- nvme/nvme.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' 00:12:19.234 20:07:17 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:19.234 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.800 20:07:23 -- nvme/nvme.sh@41 -- # bs=4096 00:12:25.800 20:07:23 -- nvme/nvme.sh@43 -- # fio_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096 00:12:25.800 20:07:23 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096 00:12:25.800 20:07:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:12:25.800 20:07:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:25.800 20:07:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:12:25.800 20:07:23 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:12:25.800 20:07:23 -- common/autotest_common.sh@1320 -- # shift 00:12:25.800 20:07:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:12:25.800 20:07:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:25.800 20:07:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:12:25.800 20:07:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:12:25.800 20:07:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:25.800 20:07:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:12:25.800 20:07:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:12:25.800 20:07:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:12:25.800 20:07:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme 00:12:25.801 20:07:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:12:25.801 20:07:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:12:25.801 20:07:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:12:25.801 20:07:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:12:25.801 20:07:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme' 00:12:25.801 20:07:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096 00:12:26.059 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:26.059 fio-3.35 00:12:26.059 Starting 1 thread 00:12:26.059 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.041 00:12:36.041 test: (groupid=0, jobs=1): err= 0: pid=2103440: Thu Apr 25 20:07:32 2024 00:12:36.041 read: IOPS=56.2k, BW=220MiB/s (230MB/s)(439MiB/2001msec) 00:12:36.041 slat (nsec): min=4496, max=44890, avg=4778.26, stdev=433.26 00:12:36.041 clat (usec): min=219, max=1657, avg=1122.60, stdev=17.32 00:12:36.041 lat (usec): min=224, max=1662, avg=1127.38, stdev=17.33 00:12:36.041 clat percentiles (usec): 00:12:36.041 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[ 1106], 
20.00th=[ 1123], 00:12:36.041 | 30.00th=[ 1123], 40.00th=[ 1123], 50.00th=[ 1123], 60.00th=[ 1123], 00:12:36.041 | 70.00th=[ 1123], 80.00th=[ 1123], 90.00th=[ 1123], 95.00th=[ 1139], 00:12:36.041 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1237], 99.95th=[ 1237], 00:12:36.041 | 99.99th=[ 1319] 00:12:36.041 bw ( KiB/s): min=218584, max=227168, per=99.74%, avg=224277.33, stdev=4930.77, samples=3 00:12:36.041 iops : min=54646, max=56792, avg=56069.33, stdev=1232.69, samples=3 00:12:36.041 write: IOPS=56.1k, BW=219MiB/s (230MB/s)(438MiB/2001msec); 0 zone resets 00:12:36.041 slat (nsec): min=4557, max=120256, avg=4892.44, stdev=557.18 00:12:36.041 clat (usec): min=201, max=1320, avg=1122.89, stdev=15.52 00:12:36.041 lat (usec): min=206, max=1325, avg=1127.78, stdev=15.54 00:12:36.041 clat percentiles (usec): 00:12:36.041 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[ 1106], 20.00th=[ 1123], 00:12:36.041 | 30.00th=[ 1123], 40.00th=[ 1123], 50.00th=[ 1123], 60.00th=[ 1123], 00:12:36.041 | 70.00th=[ 1123], 80.00th=[ 1123], 90.00th=[ 1123], 95.00th=[ 1139], 00:12:36.041 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1237], 99.95th=[ 1237], 00:12:36.041 | 99.99th=[ 1287] 00:12:36.041 bw ( KiB/s): min=218360, max=226584, per=99.60%, avg=223386.67, stdev=4406.63, samples=3 00:12:36.041 iops : min=54590, max=56646, avg=55846.67, stdev=1101.66, samples=3 00:12:36.041 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.04% 00:12:36.041 lat (msec) : 2=99.92% 00:12:36.041 cpu : usr=99.50%, sys=0.05%, ctx=3, majf=0, minf=4 00:12:36.041 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:12:36.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.041 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:36.041 issued rwts: total=112482,112195,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.041 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:36.041 00:12:36.041 Run status group 0 (all jobs): 00:12:36.041 READ: bw=220MiB/s (230MB/s), 220MiB/s-220MiB/s (230MB/s-230MB/s), io=439MiB (461MB), run=2001-2001msec 00:12:36.041 WRITE: bw=219MiB/s (230MB/s), 219MiB/s-219MiB/s (230MB/s-230MB/s), io=438MiB (460MB), run=2001-2001msec 00:12:36.041 20:07:32 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:36.041 20:07:32 -- nvme/nvme.sh@46 -- # true 00:12:36.041 00:12:36.041 real 0m22.274s 00:12:36.041 user 0m20.167s 00:12:36.041 sys 0m2.463s 00:12:36.041 20:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.041 20:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.041 ************************************ 00:12:36.041 END TEST nvme_fio 00:12:36.041 ************************************ 00:12:36.041 00:12:36.041 real 1m45.886s 00:12:36.041 user 4m2.396s 00:12:36.041 sys 0m17.758s 00:12:36.041 20:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.041 20:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.041 ************************************ 00:12:36.041 END TEST nvme 00:12:36.041 ************************************ 00:12:36.041 20:07:32 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:12:36.041 20:07:32 -- spdk/autotest.sh@227 -- # run_test nvme_scc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh 00:12:36.041 20:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:36.041 20:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.041 20:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.041 ************************************ 
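The nvme_fio results summarized above were produced through SPDK's fio plugin rather than a kernel block device; stripped of the sanitizer-library probing in the trace, the invocation earlier in this test comes down to:

SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
# fio loads the spdk_nvme ioengine via LD_PRELOAD and names the target by
# transport and address (note the dots in the traddr) instead of /dev/nvme0n1.
LD_PRELOAD="$SPDK/build/fio/spdk_nvme" /usr/src/fio/fio \
    "$SPDK/app/fio/nvme/example_config.fio" \
    '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096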
00:12:36.041 START TEST nvme_scc 00:12:36.041 ************************************ 00:12:36.041 20:07:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh 00:12:36.041 * Looking for test storage... 00:12:36.041 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:12:36.041 20:07:32 -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:12:36.041 20:07:32 -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:12:36.041 20:07:32 -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:12:36.041 20:07:32 -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:12:36.041 20:07:32 -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:12:36.041 20:07:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.041 20:07:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.041 20:07:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.041 20:07:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.041 20:07:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.041 20:07:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.041 20:07:32 -- paths/export.sh@5 -- # export PATH 00:12:36.041 20:07:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.041 20:07:32 -- nvme/functions.sh@10 -- # ctrls=() 00:12:36.041 20:07:32 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:36.041 20:07:32 -- nvme/functions.sh@11 -- # nvmes=() 00:12:36.041 20:07:32 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:36.041 20:07:32 -- nvme/functions.sh@12 -- # bdfs=() 00:12:36.041 20:07:32 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:36.041 20:07:32 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:36.041 
20:07:32 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:36.041 20:07:32 -- nvme/functions.sh@14 -- # nvme_name= 00:12:36.041 20:07:32 -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:12:36.041 20:07:32 -- nvme/nvme_scc.sh@12 -- # uname 00:12:36.041 20:07:32 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:36.041 20:07:32 -- nvme/nvme_scc.sh@12 -- # [[ ............................... == QEMU ]] 00:12:36.041 20:07:32 -- nvme/nvme_scc.sh@12 -- # exit 0 00:12:36.041 00:12:36.041 real 0m0.115s 00:12:36.041 user 0m0.049s 00:12:36.041 sys 0m0.077s 00:12:36.041 20:07:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.041 20:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.041 ************************************ 00:12:36.041 END TEST nvme_scc 00:12:36.041 ************************************ 00:12:36.041 20:07:32 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:12:36.041 20:07:32 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:12:36.041 20:07:32 -- spdk/autotest.sh@233 -- # run_test nvme_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh 00:12:36.041 20:07:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:36.041 20:07:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.041 20:07:32 -- common/autotest_common.sh@10 -- # set +x 00:12:36.041 ************************************ 00:12:36.041 START TEST nvme_cuse 00:12:36.041 ************************************ 00:12:36.041 20:07:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh 00:12:36.041 * Looking for test storage... 00:12:36.041 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:12:36.041 20:07:33 -- cuse/nvme_cuse.sh@11 -- # uname 00:12:36.041 20:07:33 -- cuse/nvme_cuse.sh@11 -- # [[ Linux != \L\i\n\u\x ]] 00:12:36.041 20:07:33 -- cuse/nvme_cuse.sh@16 -- # modprobe cuse 00:12:36.041 20:07:33 -- cuse/nvme_cuse.sh@17 -- # run_test nvme_cuse_app /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse 00:12:36.041 20:07:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:36.041 20:07:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.041 20:07:33 -- common/autotest_common.sh@10 -- # set +x 00:12:36.041 ************************************ 00:12:36.041 START TEST nvme_cuse_app 00:12:36.041 ************************************ 00:12:36.041 20:07:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse 00:12:36.041 00:12:36.041 00:12:36.041 CUnit - A unit testing framework for C - Version 2.1-3 00:12:36.041 http://cunit.sourceforge.net/ 00:12:36.041 00:12:36.041 00:12:36.041 Suite: nvme_cuse 00:12:48.259 Test: test_cuse_update ...passed 00:12:48.259 00:12:48.259 Run Summary: Type Total Ran Passed Failed Inactive 00:12:48.259 suites 1 1 n/a 0 0 00:12:48.259 tests 1 1 1 0 0 00:12:48.259 asserts 925 925 925 0 n/a 00:12:48.259 00:12:48.259 Elapsed time = 0.023 seconds 00:12:48.259 00:12:48.259 real 0m11.528s 00:12:48.259 user 0m0.007s 00:12:48.259 sys 0m0.029s 00:12:48.259 20:07:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.259 20:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:48.259 ************************************ 00:12:48.259 END TEST nvme_cuse_app 00:12:48.259 ************************************ 00:12:48.260 20:07:44 -- cuse/nvme_cuse.sh@18 -- # run_test nvme_cuse_rpc 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh 00:12:48.260 20:07:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:12:48.260 20:07:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:48.260 20:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:48.260 ************************************ 00:12:48.260 START TEST nvme_cuse_rpc 00:12:48.260 ************************************ 00:12:48.260 20:07:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh 00:12:48.260 * Looking for test storage... 00:12:48.260 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@13 -- # get_first_nvme_bdf 00:12:48.260 20:07:44 -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:48.260 20:07:44 -- common/autotest_common.sh@1509 -- # local bdfs 00:12:48.260 20:07:44 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:48.260 20:07:44 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:48.260 20:07:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:48.260 20:07:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:12:48.260 20:07:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:48.260 20:07:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:12:48.260 20:07:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:48.260 20:07:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:12:48.260 20:07:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:12:48.260 20:07:44 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@13 -- # bdf=0000:5e:00.0 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@14 -- # ctrlr_base=/dev/spdk/nvme 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@17 -- # spdk_tgt_pid=2106024 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@18 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:48.260 20:07:44 -- cuse/nvme_cuse_rpc.sh@20 -- # waitforlisten 2106024 00:12:48.260 20:07:44 -- common/autotest_common.sh@819 -- # '[' -z 2106024 ']' 00:12:48.260 20:07:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.260 20:07:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:48.260 20:07:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:48.260 20:07:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:48.260 20:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:48.260 [2024-04-25 20:07:44.887201] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:12:48.260 [2024-04-25 20:07:44.887280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106024 ] 00:12:48.260 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.260 [2024-04-25 20:07:44.993865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:48.260 [2024-04-25 20:07:45.088663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:48.260 [2024-04-25 20:07:45.088862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.260 [2024-04-25 20:07:45.088867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.260 [2024-04-25 20:07:45.281563] 'OCF_Core' volume operations registered 00:12:48.260 [2024-04-25 20:07:45.285146] 'OCF_Cache' volume operations registered 00:12:48.260 [2024-04-25 20:07:45.289097] 'OCF Composite' volume operations registered 00:12:48.260 [2024-04-25 20:07:45.292586] 'SPDK_block_device' volume operations registered 00:12:48.260 20:07:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:48.260 20:07:45 -- common/autotest_common.sh@852 -- # return 0 00:12:48.260 20:07:45 -- cuse/nvme_cuse_rpc.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:12:51.551 Nvme0n1 00:12:51.551 20:07:48 -- cuse/nvme_cuse_rpc.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:12:51.551 [2024-04-25 20:07:48.993170] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:51.551 [2024-04-25 20:07:48.993345] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:51.551 [2024-04-25 20:07:48.993463] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:51.551 20:07:49 -- cuse/nvme_cuse_rpc.sh@25 -- # sleep 5 00:12:56.831 20:07:54 -- cuse/nvme_cuse_rpc.sh@27 -- # '[' '!' 
-c /dev/spdk/nvme0 ']' 00:12:56.831 20:07:54 -- cuse/nvme_cuse_rpc.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:12:56.831 [ 00:12:56.831 { 00:12:56.831 "name": "Nvme0n1", 00:12:56.831 "aliases": [ 00:12:56.831 "82a06331-dcd8-4b44-a780-d722ff5bb8ba" 00:12:56.831 ], 00:12:56.831 "product_name": "NVMe disk", 00:12:56.831 "block_size": 512, 00:12:56.831 "num_blocks": 7814037168, 00:12:56.831 "uuid": "82a06331-dcd8-4b44-a780-d722ff5bb8ba", 00:12:56.831 "assigned_rate_limits": { 00:12:56.831 "rw_ios_per_sec": 0, 00:12:56.831 "rw_mbytes_per_sec": 0, 00:12:56.831 "r_mbytes_per_sec": 0, 00:12:56.831 "w_mbytes_per_sec": 0 00:12:56.831 }, 00:12:56.831 "claimed": false, 00:12:56.831 "zoned": false, 00:12:56.831 "supported_io_types": { 00:12:56.831 "read": true, 00:12:56.831 "write": true, 00:12:56.831 "unmap": true, 00:12:56.831 "write_zeroes": true, 00:12:56.831 "flush": true, 00:12:56.831 "reset": true, 00:12:56.831 "compare": false, 00:12:56.831 "compare_and_write": false, 00:12:56.831 "abort": true, 00:12:56.831 "nvme_admin": true, 00:12:56.831 "nvme_io": true 00:12:56.831 }, 00:12:56.831 "driver_specific": { 00:12:56.831 "nvme": [ 00:12:56.831 { 00:12:56.831 "pci_address": "0000:5e:00.0", 00:12:56.831 "trid": { 00:12:56.831 "trtype": "PCIe", 00:12:56.831 "traddr": "0000:5e:00.0" 00:12:56.831 }, 00:12:56.831 "cuse_device": "spdk/nvme0n1", 00:12:56.831 "ctrlr_data": { 00:12:56.831 "cntlid": 0, 00:12:56.831 "vendor_id": "0x8086", 00:12:56.831 "model_number": "INTEL SSDPE2KX040T8", 00:12:56.831 "serial_number": "BTLJ83030AK84P0DGN", 00:12:56.831 "firmware_revision": "VDV10184", 00:12:56.831 "oacs": { 00:12:56.831 "security": 0, 00:12:56.831 "format": 1, 00:12:56.831 "firmware": 1, 00:12:56.831 "ns_manage": 1 00:12:56.831 }, 00:12:56.831 "multi_ctrlr": false, 00:12:56.831 "ana_reporting": false 00:12:56.831 }, 00:12:56.831 "vs": { 00:12:56.831 "nvme_version": "1.2" 00:12:56.831 }, 00:12:56.831 "ns_data": { 00:12:56.831 "id": 1, 00:12:56.831 "can_share": false 00:12:56.831 } 00:12:56.831 } 00:12:56.831 ], 00:12:56.831 "mp_policy": "active_passive" 00:12:56.831 } 00:12:56.831 } 00:12:56.831 ] 00:12:56.831 20:07:54 -- cuse/nvme_cuse_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:12:56.831 [ 00:12:56.831 { 00:12:56.831 "name": "Nvme0", 00:12:56.831 "ctrlrs": [ 00:12:56.831 { 00:12:56.831 "state": "enabled", 00:12:56.831 "cuse_device": "spdk/nvme0", 00:12:56.831 "trid": { 00:12:56.831 "trtype": "PCIe", 00:12:56.831 "traddr": "0000:5e:00.0" 00:12:56.831 }, 00:12:56.831 "cntlid": 0, 00:12:56.831 "host": { 00:12:56.831 "nqn": "nqn.2014-08.org.nvmexpress:uuid:1731c482-4281-489c-be76-67426cad4420", 00:12:56.831 "addr": "", 00:12:56.831 "svcid": "" 00:12:56.831 } 00:12:56.831 } 00:12:56.831 ] 00:12:56.831 } 00:12:56.831 ] 00:12:56.831 20:07:54 -- cuse/nvme_cuse_rpc.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0 00:12:57.090 20:07:55 -- cuse/nvme_cuse_rpc.sh@35 -- # sleep 1 00:12:58.524 20:07:56 -- cuse/nvme_cuse_rpc.sh@36 -- # '[' -c /dev/spdk/nvme0 ']' 00:12:58.524 20:07:56 -- cuse/nvme_cuse_rpc.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0 00:12:58.524 [2024-04-25 20:07:56.247593] nvme_cuse.c:1343:spdk_nvme_cuse_unregister: *ERROR*: Cannot find associated CUSE device 00:12:58.524 request: 00:12:58.524 { 00:12:58.524 "name": "Nvme0", 00:12:58.524 "method": "bdev_nvme_cuse_unregister", 
00:12:58.524 "req_id": 1 00:12:58.524 } 00:12:58.524 Got JSON-RPC error response 00:12:58.524 response: 00:12:58.524 { 00:12:58.524 "code": -19, 00:12:58.524 "message": "No such device" 00:12:58.524 } 00:12:58.524 20:07:56 -- cuse/nvme_cuse_rpc.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:12:58.792 [2024-04-25 20:07:56.486550] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:12:58.792 [2024-04-25 20:07:56.486696] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:12:58.792 [2024-04-25 20:07:56.486783] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:12:58.792 20:07:56 -- cuse/nvme_cuse_rpc.sh@44 -- # sleep 1 00:12:59.729 20:07:57 -- cuse/nvme_cuse_rpc.sh@46 -- # '[' '!' -c /dev/spdk/nvme0 ']' 00:12:59.729 20:07:57 -- cuse/nvme_cuse_rpc.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:12:59.729 [2024-04-25 20:07:57.660790] bdev_nvme_cuse_rpc.c: 57:rpc_nvme_cuse_register: *ERROR*: Failed to register CUSE devices: File exists 00:12:59.988 request: 00:12:59.988 { 00:12:59.988 "name": "Nvme0", 00:12:59.988 "method": "bdev_nvme_cuse_register", 00:12:59.988 "req_id": 1 00:12:59.988 } 00:12:59.988 Got JSON-RPC error response 00:12:59.988 response: 00:12:59.988 { 00:12:59.988 "code": -17, 00:12:59.988 "message": "File exists" 00:12:59.988 } 00:12:59.988 20:07:57 -- cuse/nvme_cuse_rpc.sh@52 -- # sleep 1 00:13:00.925 20:07:58 -- cuse/nvme_cuse_rpc.sh@54 -- # '[' -c /dev/spdk/nvme1 ']' 00:13:00.925 20:07:58 -- cuse/nvme_cuse_rpc.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:06.196 20:08:03 -- cuse/nvme_cuse_rpc.sh@60 -- # trap - SIGINT SIGTERM EXIT 00:13:06.196 20:08:03 -- cuse/nvme_cuse_rpc.sh@61 -- # killprocess 2106024 00:13:06.196 20:08:03 -- common/autotest_common.sh@926 -- # '[' -z 2106024 ']' 00:13:06.196 20:08:03 -- common/autotest_common.sh@930 -- # kill -0 2106024 00:13:06.196 20:08:03 -- common/autotest_common.sh@931 -- # uname 00:13:06.196 20:08:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:06.196 20:08:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2106024 00:13:06.196 20:08:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:06.196 20:08:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:06.196 20:08:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2106024' 00:13:06.197 killing process with pid 2106024 00:13:06.197 20:08:03 -- common/autotest_common.sh@945 -- # kill 2106024 00:13:06.197 20:08:03 -- common/autotest_common.sh@950 -- # wait 2106024 00:13:06.197 00:13:06.197 real 0m19.172s 00:13:06.197 user 0m37.109s 00:13:06.197 sys 0m1.083s 00:13:06.197 20:08:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.197 20:08:03 -- common/autotest_common.sh@10 -- # set +x 00:13:06.197 ************************************ 00:13:06.197 END TEST nvme_cuse_rpc 00:13:06.197 ************************************ 00:13:06.197 20:08:03 -- cuse/nvme_cuse.sh@19 -- # run_test nvme_cli_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh 00:13:06.197 20:08:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:06.197 20:08:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.197 20:08:03 -- common/autotest_common.sh@10 -- # set +x 
00:13:06.197 ************************************ 00:13:06.197 START TEST nvme_cli_cuse 00:13:06.197 ************************************ 00:13:06.197 20:08:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh 00:13:06.197 * Looking for test storage... 00:13:06.197 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:13:06.197 20:08:03 -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:13:06.197 20:08:03 -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:13:06.197 20:08:03 -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:13:06.197 20:08:03 -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:13:06.197 20:08:03 -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:13:06.197 20:08:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:06.197 20:08:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.197 20:08:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.197 20:08:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.197 20:08:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.197 20:08:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.197 20:08:03 -- paths/export.sh@5 -- # export PATH 00:13:06.197 20:08:03 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.197 20:08:03 -- nvme/functions.sh@10 -- # ctrls=() 00:13:06.197 20:08:03 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:06.197 20:08:03 -- nvme/functions.sh@11 -- # nvmes=() 00:13:06.197 20:08:03 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:06.197 20:08:03 -- nvme/functions.sh@12 -- # bdfs=() 00:13:06.197 20:08:03 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:06.197 20:08:03 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:06.197 20:08:03 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:06.197 20:08:03 -- nvme/functions.sh@14 -- # nvme_name= 00:13:06.197 20:08:03 -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:06.197 20:08:03 -- cuse/spdk_nvme_cli_cuse.sh@10 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:13:06.197 20:08:03 -- cuse/spdk_nvme_cli_cuse.sh@11 -- # mkdir /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:13:06.197 20:08:03 -- cuse/spdk_nvme_cli_cuse.sh@13 -- # KERNEL_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out 00:13:06.197 20:08:03 -- cuse/spdk_nvme_cli_cuse.sh@14 -- # CUSE_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out 00:13:06.197 20:08:03 -- cuse/spdk_nvme_cli_cuse.sh@16 -- # NVME_CMD=/usr/local/src/nvme-cli/nvme 00:13:06.197 20:08:03 -- cuse/spdk_nvme_cli_cuse.sh@17 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:06.197 20:08:03 -- cuse/spdk_nvme_cli_cuse.sh@19 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:13:09.488 Waiting for block devices as requested 00:13:09.488 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:13:09.488 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:09.488 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:09.488 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:09.747 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:09.747 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:09.747 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:10.006 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:10.006 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:10.006 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:10.265 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:10.265 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:10.265 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:10.524 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:10.524 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:10.524 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:10.787 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:10.787 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@20 -- # scan_nvme_ctrls 00:13:10.787 20:08:08 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:10.787 20:08:08 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:10.787 20:08:08 -- nvme/functions.sh@48 -- # [[ -e 
/sys/class/nvme/nvme0 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@49 -- # pci=0000:5e:00.0 00:13:10.787 20:08:08 -- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0 00:13:10.787 20:08:08 -- scripts/common.sh@15 -- # local i 00:13:10.787 20:08:08 -- scripts/common.sh@18 -- # [[ =~ 0000:5e:00.0 ]] 00:13:10.787 20:08:08 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:10.787 20:08:08 -- scripts/common.sh@24 -- # return 0 00:13:10.787 20:08:08 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:10.787 20:08:08 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:10.787 20:08:08 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@18 -- # shift 00:13:10.787 20:08:08 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[vid]=0x8086 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n BTLJ83030AK84P0DGN ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN "' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN ' 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n INTEL SSDPE2KX040T8 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8 "' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8 ' 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n VDV10184 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[rab]=0 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 5cd2e4 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- 
# IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 5 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[mdts]=5 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x10200 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10200 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x989680 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0xe4e1c0 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x200 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x200 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.787 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.787 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"' 00:13:10.787 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:10.788 20:08:08 -- 
nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[mec]=1 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[oacs]=0xe 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x18 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x18 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[lpa]=0xe 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 63 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[elpe]=63 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 353 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[cctemp]=353 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # 
nvme0[tnvmcap]=4,000,787,030,016 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.788 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:10.788 20:08:08 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:10.788 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 
'nvme0[hmminds]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- 
nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[nn]=128 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x6 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x6 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[fna]=0x4 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[vwc]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[sgls]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[subnqn]= 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:10.789 20:08:08 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.789 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:10.789 20:08:08 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:10.789 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0' 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:10.790 20:08:08 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:10.790 20:08:08 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:10.790 20:08:08 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:10.790 20:08:08 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@18 -- # shift 00:13:10.790 20:08:08 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"' 00:13:10.790 20:08:08 -- 
nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0 00:13:10.790 
20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:10.790 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.790 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.790 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 010000000f3d00000000000000000000 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="010000000f3d00000000000000000000"' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=010000000f3d00000000000000000000 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@22 -- # [[ -n 0000000000000f3d ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000f3d"' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000f3d 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:13:10.791 20:08:08 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:13:10.791 20:08:08 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # IFS=: 00:13:10.791 20:08:08 -- nvme/functions.sh@21 -- # read -r reg val 00:13:10.791 20:08:08 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:10.791 20:08:08 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:10.791 20:08:08 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0 00:13:10.791 20:08:08 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@22 -- # get_nvme_with_ns_management 00:13:10.791 20:08:08 -- nvme/functions.sh@153 -- # local _ctrls 00:13:10.791 20:08:08 -- nvme/functions.sh@155 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:13:10.791 20:08:08 -- nvme/functions.sh@155 -- # get_nvmes_with_ns_management 00:13:10.791 20:08:08 -- nvme/functions.sh@144 -- # (( 1 == 0 )) 00:13:10.791 20:08:08 -- nvme/functions.sh@146 -- # local ctrl 00:13:10.791 20:08:08 -- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}" 00:13:10.791 20:08:08 -- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt 00:13:10.791 20:08:08 -- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt 00:13:10.791 20:08:08 -- nvme/functions.sh@122 -- # local -A bits 00:13:10.791 20:08:08 -- nvme/functions.sh@125 -- # bits["ss/sr"]=1 00:13:10.791 20:08:08 -- nvme/functions.sh@126 -- # bits["fnvme"]=2 00:13:10.791 20:08:08 -- nvme/functions.sh@127 -- # bits["fc/fi"]=4 00:13:10.791 20:08:08 -- nvme/functions.sh@128 -- # bits["nsmgt"]=8 00:13:10.791 20:08:08 -- nvme/functions.sh@129 -- # bits["self-test"]=16 00:13:10.791 20:08:08 -- nvme/functions.sh@130 -- # bits["directives"]=32 00:13:10.791 20:08:08 -- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64 00:13:10.791 20:08:08 -- nvme/functions.sh@132 -- # bits["virtmgt"]=128 00:13:10.791 20:08:08 -- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256 00:13:10.791 20:08:08 -- nvme/functions.sh@134 -- # bits["getlba"]=512 00:13:10.791 20:08:08 -- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024 00:13:10.791 20:08:08 -- nvme/functions.sh@137 -- # bit=nsmgt 00:13:10.791 20:08:08 -- nvme/functions.sh@138 -- # [[ -n 8 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs 00:13:10.791 20:08:08 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs 00:13:10.791 20:08:08 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@75 -- # [[ -n 0xe ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@76 -- # echo 0xe 00:13:10.791 20:08:08 -- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] )) 00:13:10.791 20:08:08 -- nvme/functions.sh@148 -- # echo nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@156 -- # (( 1 > 0 )) 00:13:10.791 20:08:08 -- nvme/functions.sh@157 -- # echo nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@158 -- # return 0 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@22 -- # nvme_name=nvme0 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@27 -- # sel_cmd=() 00:13:10.791 20:08:08 -- 
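
The long trace above is test/common/nvme/functions.sh caching every `nvme id-ctrl` / `nvme id-ns` field of the discovered controller into bash associative arrays (nvme0, nvme0n1), then testing the OACS namespace-management bit (0x8) to pick a controller that supports `ns-mgmt`. A minimal standalone sketch of the same idea, not the real functions.sh, assuming a kernel-visible /dev/nvme0 and the stock "field : value" output of nvme-cli:

#!/usr/bin/env bash
# Minimal sketch: cache `nvme id-ctrl` fields in an associative array, then
# test the OACS namespace-management bit (0x8), as the trace above does.
declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}          # field names are padded with spaces
    [[ -n $reg && -n $val ]] || continue
    ctrl[$reg]=${val# }               # drop the leading space nvme-cli typically prints
done < <(nvme id-ctrl /dev/nvme0)

(( ctrl[oacs] & 0x8 )) && echo "nvme0 supports namespace management (oacs=${ctrl[oacs]})"
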
cuse/spdk_nvme_cli_cuse.sh@29 -- # get_oncs nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:10.791 20:08:08 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:10.791 20:08:08 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:10.791 20:08:08 -- nvme/functions.sh@75 -- # [[ -n 0x6 ]] 00:13:10.791 20:08:08 -- nvme/functions.sh@76 -- # echo 0x6 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@29 -- # (( 0x6 & 1 << 4 )) 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@33 -- # ctrlr=/dev/nvme0 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@34 -- # ns=/dev/nvme0n1 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@35 -- # bdf=0000:5e:00.0 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@37 -- # waitforblk nvme0n1 00:13:10.791 20:08:08 -- common/autotest_common.sh@1214 -- # local i=0 00:13:10.791 20:08:08 -- common/autotest_common.sh@1215 -- # lsblk -l -o NAME 00:13:10.791 20:08:08 -- common/autotest_common.sh@1215 -- # grep -q -w nvme0n1 00:13:10.791 20:08:08 -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME 00:13:10.791 20:08:08 -- common/autotest_common.sh@1221 -- # grep -q -w nvme0n1 00:13:10.791 20:08:08 -- common/autotest_common.sh@1225 -- # return 0 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@39 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@39 -- # grep oacs 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@39 -- # cut -d: -f2 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@39 -- # oacs=' 0xe' 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@40 -- # oacs_firmware=4 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/nvme0n1 00:13:10.791 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@43 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@44 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/nvme0n1 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@46 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@47 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/nvme0 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@48 -- # '[' 4 -ne 0 ']' 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@49 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/nvme0 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@51 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/nvme0 00:13:11.051 Smart Log for NVME device:nvme0 namespace-id:ffffffff 00:13:11.051 critical_warning : 0 00:13:11.051 temperature : 38 °C (311 K) 00:13:11.051 available_spare : 99% 00:13:11.051 available_spare_threshold : 10% 00:13:11.051 percentage_used : 17% 00:13:11.051 endurance group critical warning summary: 0 00:13:11.051 Data Units Read : 371,113,750 (190.01 TB) 00:13:11.051 Data Units Written : 510,510,231 (261.38 TB) 00:13:11.051 host_read_commands : 22,084,650,226 00:13:11.051 host_write_commands : 25,063,408,072 00:13:11.051 controller_busy_time : 2,527 00:13:11.051 power_cycles : 28 00:13:11.051 power_on_hours : 15,505 00:13:11.051 unsafe_shutdowns : 45 00:13:11.051 media_errors : 0 00:13:11.051 num_err_log_entries : 19,598 00:13:11.051 Warning Temperature Time : 1188 00:13:11.051 Critical Composite Temperature Time : 0 00:13:11.051 Thermal Management T1 Trans Count : 0 00:13:11.051 Thermal Management T2 
Trans Count : 0 00:13:11.051 Thermal Management T1 Total Time : 0 00:13:11.051 Thermal Management T2 Total Time : 0 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@52 -- # /usr/local/src/nvme-cli/nvme error-log /dev/nvme0 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@53 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/nvme0 -f 1 -l 100 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@54 -- # /usr/local/src/nvme-cli/nvme get-log /dev/nvme0 -i 1 -l 100 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@59 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/nvme0 -n 1 -f 2 -v 0 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@59 -- # true 00:13:11.051 20:08:08 -- cuse/spdk_nvme_cli_cuse.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:13:14.341 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:14.341 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:14.341 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:14.341 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:14.341 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:14.341 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:14.341 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:14.342 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:17.630 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:13:17.630 20:08:15 -- cuse/spdk_nvme_cli_cuse.sh@64 -- # spdk_tgt_pid=2111474 00:13:17.631 20:08:15 -- cuse/spdk_nvme_cli_cuse.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:13:17.631 20:08:15 -- cuse/spdk_nvme_cli_cuse.sh@65 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:17.631 20:08:15 -- cuse/spdk_nvme_cli_cuse.sh@67 -- # waitforlisten 2111474 00:13:17.631 20:08:15 -- common/autotest_common.sh@819 -- # '[' -z 2111474 ']' 00:13:17.631 20:08:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.631 20:08:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:17.631 20:08:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.631 20:08:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:17.631 20:08:15 -- common/autotest_common.sh@10 -- # set +x 00:13:17.631 [2024-04-25 20:08:15.497780] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
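
After the baseline kernel-side captures, setup.sh rebinds the NVMe controller and the ioatdma channels to vfio-pci and spdk_tgt is launched on cores 0-1 (-m 0x3), with waitforlisten blocking until the RPC socket answers. A hedged sketch of that bring-up, assuming this job's repo path and the default /var/tmp/spdk.sock socket shown in the log; the loop below only polls for the socket file, whereas waitforlisten does a real RPC probe:

# Sketch: rebind devices, start spdk_tgt, wait for its RPC socket.
SPDK_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk

sudo "$SPDK_DIR/scripts/setup.sh"            # nvme + ioatdma -> vfio-pci, as logged above
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x3 &      # two reactors, cores 0 and 1
spdk_tgt_pid=$!
trap 'kill -9 $spdk_tgt_pid; exit 1' SIGINT SIGTERM EXIT

for ((i = 0; i < 100; i++)); do              # crude stand-in for waitforlisten()
    [[ -S /var/tmp/spdk.sock ]] && break
    sleep 0.1
done
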
00:13:17.631 [2024-04-25 20:08:15.497855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111474 ] 00:13:17.631 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.890 [2024-04-25 20:08:15.603889] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:17.890 [2024-04-25 20:08:15.702682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:17.890 [2024-04-25 20:08:15.702879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.890 [2024-04-25 20:08:15.702891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.149 [2024-04-25 20:08:15.880865] 'OCF_Core' volume operations registered 00:13:18.149 [2024-04-25 20:08:15.884112] 'OCF_Cache' volume operations registered 00:13:18.149 [2024-04-25 20:08:15.887790] 'OCF Composite' volume operations registered 00:13:18.149 [2024-04-25 20:08:15.891049] 'SPDK_block_device' volume operations registered 00:13:18.717 20:08:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:18.717 20:08:16 -- common/autotest_common.sh@852 -- # return 0 00:13:18.717 20:08:16 -- cuse/spdk_nvme_cli_cuse.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:13:22.007 Nvme0n1 00:13:22.007 20:08:19 -- cuse/spdk_nvme_cli_cuse.sh@70 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:13:22.007 [2024-04-25 20:08:19.714063] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:22.007 [2024-04-25 20:08:19.714232] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:22.007 [2024-04-25 20:08:19.714344] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:22.007 20:08:19 -- cuse/spdk_nvme_cli_cuse.sh@72 -- # ctrlr=/dev/spdk/nvme0 00:13:22.007 20:08:19 -- cuse/spdk_nvme_cli_cuse.sh@73 -- # ns=/dev/spdk/nvme0n1 00:13:22.007 20:08:19 -- cuse/spdk_nvme_cli_cuse.sh@74 -- # waitforfile /dev/spdk/nvme0n1 00:13:22.007 20:08:19 -- common/autotest_common.sh@1244 -- # local i=0 00:13:22.007 20:08:19 -- common/autotest_common.sh@1245 -- # '[' '!' -e /dev/spdk/nvme0n1 ']' 00:13:22.007 20:08:19 -- common/autotest_common.sh@1251 -- # '[' '!' 
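
The two RPCs traced above are what turn the PCIe controller into the CUSE character-device pair (/dev/spdk/nvme0 and /dev/spdk/nvme0n1) that the rest of the test drives with plain nvme-cli. A condensed sketch using the same RPC names and arguments as the log, with a simple existence poll in place of the waitforfile helper:

# Sketch: expose the SPDK-owned controller through CUSE (same RPCs as in the trace).
RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0   # -> bdev Nvme0n1
$RPC bdev_nvme_cuse_register -n Nvme0                               # -> /dev/spdk/nvme0*

for ((i = 0; i < 50; i++)); do               # stand-in for waitforfile()
    [[ -e /dev/spdk/nvme0n1 ]] && break
    sleep 0.1
done
nvme id-ctrl /dev/spdk/nvme0 > /dev/null && echo "CUSE ctrl node answers admin ioctls"
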
-e /dev/spdk/nvme0n1 ']' 00:13:22.007 20:08:19 -- common/autotest_common.sh@1255 -- # return 0 00:13:22.007 20:08:19 -- cuse/spdk_nvme_cli_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:13:22.267 [ 00:13:22.267 { 00:13:22.267 "name": "Nvme0n1", 00:13:22.267 "aliases": [ 00:13:22.267 "4910b147-e877-4163-9b7f-a96b3b95c3ab" 00:13:22.267 ], 00:13:22.267 "product_name": "NVMe disk", 00:13:22.267 "block_size": 512, 00:13:22.267 "num_blocks": 7814037168, 00:13:22.267 "uuid": "4910b147-e877-4163-9b7f-a96b3b95c3ab", 00:13:22.267 "assigned_rate_limits": { 00:13:22.267 "rw_ios_per_sec": 0, 00:13:22.267 "rw_mbytes_per_sec": 0, 00:13:22.267 "r_mbytes_per_sec": 0, 00:13:22.267 "w_mbytes_per_sec": 0 00:13:22.267 }, 00:13:22.267 "claimed": false, 00:13:22.267 "zoned": false, 00:13:22.267 "supported_io_types": { 00:13:22.267 "read": true, 00:13:22.267 "write": true, 00:13:22.267 "unmap": true, 00:13:22.267 "write_zeroes": true, 00:13:22.267 "flush": true, 00:13:22.267 "reset": true, 00:13:22.267 "compare": false, 00:13:22.267 "compare_and_write": false, 00:13:22.267 "abort": true, 00:13:22.267 "nvme_admin": true, 00:13:22.267 "nvme_io": true 00:13:22.267 }, 00:13:22.267 "driver_specific": { 00:13:22.267 "nvme": [ 00:13:22.267 { 00:13:22.267 "pci_address": "0000:5e:00.0", 00:13:22.267 "trid": { 00:13:22.267 "trtype": "PCIe", 00:13:22.267 "traddr": "0000:5e:00.0" 00:13:22.267 }, 00:13:22.267 "cuse_device": "spdk/nvme0n1", 00:13:22.267 "ctrlr_data": { 00:13:22.267 "cntlid": 0, 00:13:22.267 "vendor_id": "0x8086", 00:13:22.267 "model_number": "INTEL SSDPE2KX040T8", 00:13:22.267 "serial_number": "BTLJ83030AK84P0DGN", 00:13:22.267 "firmware_revision": "VDV10184", 00:13:22.267 "oacs": { 00:13:22.267 "security": 0, 00:13:22.267 "format": 1, 00:13:22.267 "firmware": 1, 00:13:22.267 "ns_manage": 1 00:13:22.267 }, 00:13:22.267 "multi_ctrlr": false, 00:13:22.267 "ana_reporting": false 00:13:22.267 }, 00:13:22.267 "vs": { 00:13:22.267 "nvme_version": "1.2" 00:13:22.267 }, 00:13:22.267 "ns_data": { 00:13:22.267 "id": 1, 00:13:22.267 "can_share": false 00:13:22.267 } 00:13:22.267 } 00:13:22.267 ], 00:13:22.267 "mp_policy": "active_passive" 00:13:22.267 } 00:13:22.267 } 00:13:22.267 ] 00:13:22.267 20:08:19 -- cuse/spdk_nvme_cli_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:13:22.267 [ 00:13:22.267 { 00:13:22.267 "name": "Nvme0", 00:13:22.267 "ctrlrs": [ 00:13:22.267 { 00:13:22.267 "state": "enabled", 00:13:22.267 "cuse_device": "spdk/nvme0", 00:13:22.267 "trid": { 00:13:22.267 "trtype": "PCIe", 00:13:22.267 "traddr": "0000:5e:00.0" 00:13:22.267 }, 00:13:22.267 "cntlid": 0, 00:13:22.267 "host": { 00:13:22.267 "nqn": "nqn.2014-08.org.nvmexpress:uuid:d6dd9bdb-2454-4f64-93f8-8732d23cba50", 00:13:22.267 "addr": "", 00:13:22.267 "svcid": "" 00:13:22.267 } 00:13:22.267 } 00:13:22.267 ] 00:13:22.267 } 00:13:22.267 ] 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@79 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/spdk/nvme0n1 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@80 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/spdk/nvme0n1 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@81 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/spdk/nvme0n1 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@83 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/spdk/nvme0 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@84 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/spdk/nvme0 00:13:22.527 20:08:20 -- 
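
The bdev_get_bdevs and bdev_nvme_get_controllers dumps above are also the easiest place to confirm programmatically that CUSE registration took: the nvme entry gains a cuse_device field and the controller reports state "enabled". A small shell-side check along those lines; the -b/-n name filters are an assumption of the standard rpc.py options (dropping them dumps everything, as the test does):

# Sketch: verify CUSE registration from the RPC side instead of probing /dev.
RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_get_bdevs -b Nvme0n1 | grep cuse_device      # -> "cuse_device": "spdk/nvme0n1",
$RPC bdev_nvme_get_controllers -n Nvme0 | grep state   # -> "state": "enabled",
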
cuse/spdk_nvme_cli_cuse.sh@85 -- # '[' 4 -ne 0 ']' 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@86 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/spdk/nvme0 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@88 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/spdk/nvme0 00:13:22.527 Smart Log for NVME device:nvme0 namespace-id:ffffffff 00:13:22.527 critical_warning : 0 00:13:22.527 temperature : 38 °C (311 K) 00:13:22.527 available_spare : 99% 00:13:22.527 available_spare_threshold : 10% 00:13:22.527 percentage_used : 17% 00:13:22.527 endurance group critical warning summary: 0 00:13:22.527 Data Units Read : 371,113,752 (190.01 TB) 00:13:22.527 Data Units Written : 510,510,231 (261.38 TB) 00:13:22.527 host_read_commands : 22,084,650,281 00:13:22.527 host_write_commands : 25,063,408,072 00:13:22.527 controller_busy_time : 2,527 00:13:22.527 power_cycles : 28 00:13:22.527 power_on_hours : 15,505 00:13:22.527 unsafe_shutdowns : 45 00:13:22.527 media_errors : 0 00:13:22.527 num_err_log_entries : 19,598 00:13:22.527 Warning Temperature Time : 1188 00:13:22.527 Critical Composite Temperature Time : 0 00:13:22.527 Thermal Management T1 Trans Count : 0 00:13:22.527 Thermal Management T2 Trans Count : 0 00:13:22.527 Thermal Management T1 Total Time : 0 00:13:22.527 Thermal Management T2 Total Time : 0 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@89 -- # /usr/local/src/nvme-cli/nvme error-log /dev/spdk/nvme0 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@90 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/spdk/nvme0 -f 1 -l 100 00:13:22.527 [2024-04-25 20:08:20.386513] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40. 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@91 -- # /usr/local/src/nvme-cli/nvme get-log /dev/spdk/nvme0 -i 1 -l 100 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@92 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0 00:13:22.527 [2024-04-25 20:08:20.429249] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@93 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/spdk/nvme0 -n 1 -f 2 -v 0 00:13:22.527 [2024-04-25 20:08:20.449310] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:186 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:22.527 [2024-04-25 20:08:20.449344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT NAMESPACE SPECIFIC (01/0f) qid:0 cid:186 cdw0:0 sqhd:000d p:1 m:0 dnr:1 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@93 -- # true 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 ']' 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 ']' 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.527 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 ']' 00:13:22.787 20:08:20 -- 
cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8 ']' 00:13:22.787 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 ']' 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9 ']' 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 ']' 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10 ']' 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines 
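
The repeated sed/diff pairs are the heart of the test: each kernel.out.N file was captured earlier against /dev/nvme0, the matching cuse.out.N against /dev/spdk/nvme0, and any non-common line is a behavioural difference between the kernel driver and the CUSE path (the loop runs on through capture 11 just below). A compact sketch of that comparison loop; kernel_name/cuse_name are placeholders for the two detected device names, which in this run both happen to be nvme0, making the logged sed a no-op, and the explicit exit stands in for the script's set -e behaviour:

# Sketch: compare per-command captures from the kernel device and the CUSE device.
match=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
kernel_name=nvme0
cuse_name=nvme0
for i in {1..11}; do
    [[ -f $match/kernel.out.$i && -f $match/cuse.out.$i ]] || continue
    sed -i "s/$kernel_name/$cuse_name/g" "$match/kernel.out.$i"   # normalise device names
    diff --suppress-common-lines "$match/kernel.out.$i" "$match/cuse.out.$i" || {
        echo "kernel vs CUSE mismatch in capture $i" >&2
        exit 1
    }
done
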
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11} 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 ']' 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11 ']' 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@102 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@105 -- # head -c512 /dev/urandom 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@106 -- # /usr/local/src/nvme-cli/nvme write /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file 00:13:22.788 write: Success 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme read /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file 00:13:22.788 read: Success 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@108 -- # cmp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@109 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@113 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 5 --cdw10=0x3ff0003 --cdw11=0x1 -r 00:13:22.788 Admin Command Create I/O Completion Queue is Success and result: 0x00000000 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@114 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 4 --cdw10=0x3 00:13:22.788 Admin Command Delete I/O Completion Queue is Success and result: 0x00000000 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@116 -- # [[ -c /dev/spdk/nvme0 ]] 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@117 -- # [[ -c /dev/spdk/nvme0n1 ]] 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@119 -- # trap - SIGINT SIGTERM EXIT 00:13:22.788 20:08:20 -- cuse/spdk_nvme_cli_cuse.sh@120 -- # killprocess 2111474 00:13:22.788 20:08:20 -- common/autotest_common.sh@926 -- # '[' -z 2111474 ']' 00:13:22.788 20:08:20 -- common/autotest_common.sh@930 -- # kill -0 2111474 00:13:22.788 20:08:20 -- common/autotest_common.sh@931 -- # uname 00:13:22.788 20:08:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:22.788 20:08:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2111474 00:13:23.046 20:08:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:23.046 20:08:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:23.046 20:08:20 -- common/autotest_common.sh@944 -- # 
echo 'killing process with pid 2111474' 00:13:23.046 killing process with pid 2111474 00:13:23.046 20:08:20 -- common/autotest_common.sh@945 -- # kill 2111474 00:13:23.046 20:08:20 -- common/autotest_common.sh@950 -- # wait 2111474 00:13:28.320 00:13:28.320 real 0m21.429s 00:13:28.320 user 0m21.565s 00:13:28.320 sys 0m5.904s 00:13:28.320 20:08:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.320 20:08:25 -- common/autotest_common.sh@10 -- # set +x 00:13:28.320 ************************************ 00:13:28.320 END TEST nvme_cli_cuse 00:13:28.320 ************************************ 00:13:28.320 20:08:25 -- cuse/nvme_cuse.sh@20 -- # run_test nvme_cli_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh 00:13:28.320 20:08:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:28.321 20:08:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:28.321 20:08:25 -- common/autotest_common.sh@10 -- # set +x 00:13:28.321 ************************************ 00:13:28.321 START TEST nvme_cli_plugin 00:13:28.321 ************************************ 00:13:28.321 20:08:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh 00:13:28.321 * Looking for test storage... 00:13:28.321 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:13:28.321 20:08:25 -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:13:28.321 20:08:25 -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:13:28.321 20:08:25 -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:13:28.321 20:08:25 -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:13:28.321 20:08:25 -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:13:28.321 20:08:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.321 20:08:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.321 20:08:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.321 20:08:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.321 20:08:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.321 20:08:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.321 20:08:25 -- paths/export.sh@5 -- # export PATH 00:13:28.321 20:08:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.321 20:08:25 -- nvme/functions.sh@10 -- # ctrls=() 00:13:28.321 20:08:25 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:28.321 20:08:25 -- nvme/functions.sh@11 -- # nvmes=() 00:13:28.321 20:08:25 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:28.321 20:08:25 -- nvme/functions.sh@12 -- # bdfs=() 00:13:28.321 20:08:25 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:28.321 20:08:25 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:28.321 20:08:25 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:28.321 20:08:25 -- nvme/functions.sh@14 -- # nvme_name= 00:13:28.321 20:08:25 -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:28.321 20:08:25 -- cuse/spdk_nvme_cli_plugin.sh@11 -- # trap 'killprocess $spdk_tgt_pid; "$rootdir/scripts/setup.sh" reset' EXIT 00:13:28.321 20:08:25 -- cuse/spdk_nvme_cli_plugin.sh@28 -- # kernel_out=() 00:13:28.321 20:08:25 -- cuse/spdk_nvme_cli_plugin.sh@29 -- # cuse_out=() 00:13:28.321 20:08:25 -- cuse/spdk_nvme_cli_plugin.sh@31 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:28.321 20:08:25 -- cuse/spdk_nvme_cli_plugin.sh@36 -- # export PCI_BLOCKED= 00:13:28.321 20:08:25 -- cuse/spdk_nvme_cli_plugin.sh@36 -- # PCI_BLOCKED= 00:13:28.321 20:08:25 -- cuse/spdk_nvme_cli_plugin.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:13:30.915 Waiting for block devices as requested 00:13:30.915 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:13:30.915 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:30.915 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:30.915 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:30.915 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:30.915 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:31.174 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:31.174 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:31.174 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:31.434 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:31.434 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:31.434 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:31.694 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:31.694 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:31.694 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:31.956 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:31.956 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:31.956 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@39 -- # scan_nvme_ctrls 00:13:31.956 20:08:29 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:31.956 20:08:29 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:31.956 20:08:29 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:31.956 20:08:29 -- nvme/functions.sh@49 -- # pci=0000:5e:00.0 00:13:31.956 20:08:29 -- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0 00:13:31.956 20:08:29 -- scripts/common.sh@15 -- # local i 00:13:31.957 20:08:29 -- scripts/common.sh@18 -- # [[ =~ 0000:5e:00.0 ]] 00:13:31.957 20:08:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:31.957 20:08:29 -- scripts/common.sh@24 -- # return 0 00:13:31.957 20:08:29 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:31.957 20:08:29 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:31.957 20:08:29 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@18 -- # shift 00:13:31.957 20:08:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[vid]=0x8086 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n BTLJ83030AK84P0DGN ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN "' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN ' 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n INTEL SSDPE2KX040T8 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8 "' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8 ' 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n VDV10184 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"' 00:13:31.957 20:08:29 -- 
nvme/functions.sh@23 -- # nvme0[rab]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 5cd2e4 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 5 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[mdts]=5 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x10200 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10200 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x989680 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0xe4e1c0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x200 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x200 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[mec]=1 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[oacs]=0xe 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 
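The xtrace above shows the nvme_get helper from test/common/nvme/functions.sh walking `nvme id-ctrl` output field by field with `IFS=: read -r reg val` and storing each register in the nvme0 associative array. A minimal standalone sketch of that pattern follows; it assumes the nvme-cli binary path used in this run and an existing /dev/nvme0 character device, and the whitespace trimming is illustrative rather than the helper's exact code.

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern traced above: parse `nvme id-ctrl` text
    # output into a bash associative array keyed by register name (vid, sn, mn, fr, ...).
    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=$(tr -d '[:space:]' <<<"$reg")       # field names are padded in nvme-cli output
        val=$(sed 's/^ *//;s/ *$//' <<<"$val")   # trim the ends, keep inner spaces (model strings)
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    printf 'sn=%s mn=%s fr=%s mdts=%s\n' "${nvme0[sn]}" "${nvme0[mn]}" "${nvme0[fr]}" "${nvme0[mdts]}"

Populated this way, the array mirrors what the trace prints for this drive, e.g. nvme0[mn]='INTEL SSDPE2KX040T8 ' and nvme0[fr]=VDV10184.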
00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x18 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x18 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[lpa]=0xe 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 63 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[elpe]=63 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.957 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:31.957 20:08:29 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.957 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 353 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[cctemp]=353 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 
00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 
-- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[nn]=128 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x6 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"' 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x6 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.958 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.958 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.958 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[fna]=0x4 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[vwc]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:31.959 
20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[sgls]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[subnqn]= 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:31.959 20:08:29 -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0' 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:31.959 20:08:29 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:31.959 20:08:29 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:31.959 20:08:29 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:31.959 20:08:29 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@18 -- # shift 00:13:31.959 20:08:29 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns 
/dev/nvme0n1 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.959 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.959 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:31.959 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:31.960 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:13:31.960 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"' 00:13:31.960 20:08:29 -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016 00:13:31.960 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 010000000f3d00000000000000000000 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="010000000f3d00000000000000000000"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=010000000f3d00000000000000000000 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n 0000000000000f3d ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000f3d"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000f3d 00:13:32.220 20:08:29 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:13:32.220 20:08:29 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # IFS=: 00:13:32.220 20:08:29 -- nvme/functions.sh@21 -- # read -r reg val 00:13:32.220 20:08:29 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:32.220 20:08:29 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:32.220 20:08:29 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:32.220 20:08:29 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0 00:13:32.220 20:08:29 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:32.220 20:08:29 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@41 -- # nvme list 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@41 -- # kernel_out[0]='Node Generic SN Model Namespace Usage Format FW Rev 00:13:32.220 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:13:32.220 nvme0n1 nvme0n1 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 TB 512 B + 0 B VDV10184' 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@42 -- # nvme list -v 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@42 -- # kernel_out[1]='Subsystem Subsystem-NQN Controllers 00:13:32.220 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:13:32.220 nvme0 nvme0 00:13:32.220 00:13:32.220 Device SN MN FR TxPort Address Subsystem Namespaces 00:13:32.220 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:13:32.220 nvme0 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:5e:00.0 nvme0 nvme0n1 00:13:32.220 00:13:32.220 Device Generic NSID Usage Format Controllers 00:13:32.220 ------------ ------------ ---------- -------------------------- ---------------- 
---------------- 00:13:32.220 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@43 -- # nvme list -v -o json 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v -o json 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:32.220 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@43 -- # kernel_out[2]='{ 00:13:32.220 "Devices":[ 00:13:32.220 { 00:13:32.220 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e", 00:13:32.220 "Subsystems":[ 00:13:32.220 { 00:13:32.220 "Subsystem":"nvme0", 00:13:32.220 00:13:32.220 "Controllers":[ 00:13:32.220 { 00:13:32.220 "Controller":"nvme0", 00:13:32.220 "SerialNumber":"BTLJ83030AK84P0DGN", 00:13:32.220 "ModelNumber":"INTEL SSDPE2KX040T8", 00:13:32.220 "Firmware":"VDV10184", 00:13:32.220 "Transport":"pcie", 00:13:32.220 "Address":"0000:5e:00.0", 00:13:32.220 "Namespaces":[ 00:13:32.220 { 00:13:32.220 "NameSpace":"nvme0n1", 00:13:32.220 "Generic":"nvme0n1", 00:13:32.220 "NSID":1, 00:13:32.220 "UsedBytes":4000787030016, 00:13:32.220 "MaximumLBA":7814037168, 00:13:32.220 "PhysicalSize":4000787030016, 00:13:32.220 "SectorSize":512 00:13:32.220 } 00:13:32.220 ], 00:13:32.220 "Paths":[ 00:13:32.220 ] 00:13:32.221 } 00:13:32.221 ], 00:13:32.221 "Namespaces":[ 00:13:32.221 ] 00:13:32.221 } 00:13:32.221 ] 00:13:32.221 } 00:13:32.221 ] 00:13:32.221 }' 00:13:32.221 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@44 -- # nvme list-subsys 00:13:32.221 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list-subsys 00:13:32.221 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:32.221 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:32.221 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@44 -- # kernel_out[3]='nvme0 - 00:13:32.221 \ 00:13:32.221 +- nvme0 pcie 0000:5e:00.0 live' 00:13:32.221 20:08:29 -- cuse/spdk_nvme_cli_plugin.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:13:35.511 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:35.511 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:38.801 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:13:38.801 20:08:36 -- cuse/spdk_nvme_cli_plugin.sh@49 -- # spdk_tgt_pid=2115506 
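At this point the kernel_out[] array holds the plugin's `nvme list`, `list -v`, `list -v -o json` and `list-subsys` output taken through the kernel driver, each piped through the sed filter that strips NQNs, /dev prefixes and other run-specific noise. Once spdk_tgt is up and the CUSE nodes exist, the same commands run as `nvme spdk list ...` to fill cuse_out[], and the two views are compared. A condensed sketch of that comparison, assuming the plugin binary path from this run and only a representative subset of the sed patterns:

    # Hypothetical condensed form of the kernel-vs-CUSE check in
    # spdk_nvme_cli_plugin.sh: capture both views, normalize, then diff.
    nvme_plugin=/usr/local/src/nvme-cli-plugin/nvme
    normalize() { sed -e 's#nqn.\+ ##g' -e 's#/dev\(/spdk\)\?/##g' -e 's#(null)#live#g'; }
    kernel_view=$("$nvme_plugin" list | normalize)
    cuse_view=$("$nvme_plugin" spdk list | normalize)
    diff <(printf '%s\n' "$kernel_view") <(printf '%s\n' "$cuse_view") \
        && echo 'kernel and CUSE views match'

The point of the normalization is that serial number, model, firmware and namespace geometry should look identical through either path, while device node names and transport details legitimately differ.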
00:13:38.801 20:08:36 -- cuse/spdk_nvme_cli_plugin.sh@48 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:13:38.801 20:08:36 -- cuse/spdk_nvme_cli_plugin.sh@51 -- # waitforlisten 2115506 00:13:38.801 20:08:36 -- common/autotest_common.sh@819 -- # '[' -z 2115506 ']' 00:13:38.801 20:08:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.801 20:08:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:38.801 20:08:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.801 20:08:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:38.801 20:08:36 -- common/autotest_common.sh@10 -- # set +x 00:13:38.801 [2024-04-25 20:08:36.490141] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:13:38.801 [2024-04-25 20:08:36.490210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115506 ] 00:13:38.801 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.801 [2024-04-25 20:08:36.596985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.801 [2024-04-25 20:08:36.695319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:38.801 [2024-04-25 20:08:36.695478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.061 [2024-04-25 20:08:36.886004] 'OCF_Core' volume operations registered 00:13:39.061 [2024-04-25 20:08:36.889753] 'OCF_Cache' volume operations registered 00:13:39.061 [2024-04-25 20:08:36.893700] 'OCF Composite' volume operations registered 00:13:39.061 [2024-04-25 20:08:36.897189] 'SPDK_block_device' volume operations registered 00:13:39.636 20:08:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:39.636 20:08:37 -- common/autotest_common.sh@852 -- # return 0 00:13:39.636 20:08:37 -- cuse/spdk_nvme_cli_plugin.sh@54 -- # for ctrl in "${ordered_ctrls[@]}" 00:13:39.636 20:08:37 -- cuse/spdk_nvme_cli_plugin.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0 00:13:42.924 nvme0n1 00:13:42.924 20:08:40 -- cuse/spdk_nvme_cli_plugin.sh@56 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n nvme0 00:13:42.924 [2024-04-25 20:08:40.712828] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:13:42.924 [2024-04-25 20:08:40.712996] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:13:42.924 [2024-04-25 20:08:40.713104] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:13:42.924 20:08:40 -- cuse/spdk_nvme_cli_plugin.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:13:43.183 [ 00:13:43.183 { 00:13:43.183 "name": "nvme0n1", 00:13:43.183 "aliases": [ 00:13:43.183 "f8951333-5e4d-43fe-a812-75bf2dbffe91" 00:13:43.183 ], 00:13:43.183 "product_name": "NVMe disk", 00:13:43.183 "block_size": 512, 00:13:43.183 "num_blocks": 7814037168, 00:13:43.183 "uuid": "f8951333-5e4d-43fe-a812-75bf2dbffe91", 00:13:43.183 "assigned_rate_limits": { 00:13:43.183 "rw_ios_per_sec": 0, 
00:13:43.183 "rw_mbytes_per_sec": 0, 00:13:43.183 "r_mbytes_per_sec": 0, 00:13:43.183 "w_mbytes_per_sec": 0 00:13:43.183 }, 00:13:43.183 "claimed": false, 00:13:43.183 "zoned": false, 00:13:43.183 "supported_io_types": { 00:13:43.183 "read": true, 00:13:43.183 "write": true, 00:13:43.183 "unmap": true, 00:13:43.183 "write_zeroes": true, 00:13:43.183 "flush": true, 00:13:43.183 "reset": true, 00:13:43.183 "compare": false, 00:13:43.183 "compare_and_write": false, 00:13:43.183 "abort": true, 00:13:43.183 "nvme_admin": true, 00:13:43.183 "nvme_io": true 00:13:43.183 }, 00:13:43.183 "driver_specific": { 00:13:43.183 "nvme": [ 00:13:43.183 { 00:13:43.183 "pci_address": "0000:5e:00.0", 00:13:43.183 "trid": { 00:13:43.183 "trtype": "PCIe", 00:13:43.183 "traddr": "0000:5e:00.0" 00:13:43.183 }, 00:13:43.183 "cuse_device": "spdk/nvme0n1", 00:13:43.183 "ctrlr_data": { 00:13:43.183 "cntlid": 0, 00:13:43.183 "vendor_id": "0x8086", 00:13:43.183 "model_number": "INTEL SSDPE2KX040T8", 00:13:43.183 "serial_number": "BTLJ83030AK84P0DGN", 00:13:43.183 "firmware_revision": "VDV10184", 00:13:43.183 "oacs": { 00:13:43.183 "security": 0, 00:13:43.183 "format": 1, 00:13:43.183 "firmware": 1, 00:13:43.183 "ns_manage": 1 00:13:43.183 }, 00:13:43.183 "multi_ctrlr": false, 00:13:43.183 "ana_reporting": false 00:13:43.183 }, 00:13:43.183 "vs": { 00:13:43.183 "nvme_version": "1.2" 00:13:43.183 }, 00:13:43.183 "ns_data": { 00:13:43.183 "id": 1, 00:13:43.183 "can_share": false 00:13:43.183 } 00:13:43.183 } 00:13:43.183 ], 00:13:43.183 "mp_policy": "active_passive" 00:13:43.183 } 00:13:43.183 } 00:13:43.183 ] 00:13:43.183 20:08:40 -- cuse/spdk_nvme_cli_plugin.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers 00:13:43.443 [ 00:13:43.443 { 00:13:43.443 "name": "nvme0", 00:13:43.443 "ctrlrs": [ 00:13:43.443 { 00:13:43.443 "state": "enabled", 00:13:43.443 "cuse_device": "spdk/nvme0", 00:13:43.443 "trid": { 00:13:43.443 "trtype": "PCIe", 00:13:43.443 "traddr": "0000:5e:00.0" 00:13:43.443 }, 00:13:43.443 "cntlid": 0, 00:13:43.443 "host": { 00:13:43.443 "nqn": "nqn.2014-08.org.nvmexpress:uuid:e1962900-cb42-408d-a1a8-749d5d8ca215", 00:13:43.443 "addr": "", 00:13:43.443 "svcid": "" 00:13:43.443 } 00:13:43.443 } 00:13:43.443 ] 00:13:43.443 } 00:13:43.443 ] 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@63 -- # nvme spdk list 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@63 -- # cuse_out[0]='Node Generic SN Model Namespace Usage Format FW Rev 00:13:43.443 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:13:43.443 nvme0n1 nvme0n1 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 TB 512 B + 0 B VDV10184' 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@64 -- # nvme spdk list -v 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 
's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@64 -- # cuse_out[1]='Subsystem Subsystem-NQN Controllers 00:13:43.443 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:13:43.443 nvme0 nvme0 00:13:43.443 00:13:43.443 Device SN MN FR TxPort Address Subsystem Namespaces 00:13:43.443 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:13:43.443 nvme0 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:5e:00.0 nvme0 nvme0n1 00:13:43.443 00:13:43.443 Device Generic NSID Usage Format Controllers 00:13:43.443 ------------ ------------ ---------- -------------------------- ---------------- ---------------- 00:13:43.443 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@65 -- # nvme spdk list -v -o json 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v -o json 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:43.443 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@65 -- # cuse_out[2]='{ 00:13:43.443 "Devices":[ 00:13:43.443 { 00:13:43.443 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e", 00:13:43.443 "Subsystems":[ 00:13:43.443 { 00:13:43.443 "Subsystem":"nvme0", 00:13:43.443 00:13:43.443 "Controllers":[ 00:13:43.443 { 00:13:43.443 "Controller":"nvme0", 00:13:43.443 "SerialNumber":"BTLJ83030AK84P0DGN", 00:13:43.443 "ModelNumber":"INTEL SSDPE2KX040T8", 00:13:43.443 "Firmware":"VDV10184", 00:13:43.443 "Transport":"pcie", 00:13:43.443 "Address":"0000:5e:00.0", 00:13:43.443 "Namespaces":[ 00:13:43.443 { 00:13:43.443 "NameSpace":"nvme0n1", 00:13:43.443 "Generic":"nvme0n1", 00:13:43.443 "NSID":1, 00:13:43.443 "UsedBytes":4000787030016, 00:13:43.443 "MaximumLBA":7814037168, 00:13:43.443 "PhysicalSize":4000787030016, 00:13:43.443 "SectorSize":512 00:13:43.443 } 00:13:43.443 ], 00:13:43.443 "Paths":[ 00:13:43.443 ] 00:13:43.443 } 00:13:43.443 ], 00:13:43.443 "Namespaces":[ 00:13:43.443 ] 00:13:43.443 } 00:13:43.443 ] 00:13:43.444 } 00:13:43.444 ] 00:13:43.444 }' 00:13:43.444 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@66 -- # nvme spdk list-subsys 00:13:43.444 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys 00:13:43.444 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:43.444 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:43.444 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@66 -- # cuse_out[3]='nvme0 - 00:13:43.444 \ 00:13:43.444 +- nvme0 pcie 0000:5e:00.0 live' 00:13:43.444 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@69 -- # nvme spdk list-subsys -v -o json 00:13:43.444 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys -v -o json 00:13:43.444 20:08:41 -- 
cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme##g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g' 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 )) 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # trap - ERR 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@25 -- # print_backtrace 00:13:43.704 20:08:41 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:13:43.704 20:08:41 -- common/autotest_common.sh@1132 -- # return 0 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@69 -- # [[ Json output format is not supported. == \J\s\o\n\ \o\u\t\p\u\t\ \f\o\r\m\a\t\ \i\s\ \n\o\t\ \s\u\p\p\o\r\t\e\d\. ]] 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@71 -- # diff -ub /dev/fd/62 /dev/fd/61 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node Generic SN Model Namespace Usage Format FW Rev 00:13:43.704 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:13:43.704 nvme0n1 nvme0n1 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 TB 512 B + 0 B VDV10184' 'Subsystem Subsystem-NQN Controllers 00:13:43.704 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:13:43.704 nvme0 nvme0 00:13:43.704 00:13:43.704 Device SN MN FR TxPort Address Subsystem Namespaces 00:13:43.704 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:13:43.704 nvme0 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:5e:00.0 nvme0 nvme0n1 00:13:43.704 00:13:43.704 Device Generic NSID Usage Format Controllers 00:13:43.704 ------------ ------------ ---------- -------------------------- ---------------- ---------------- 00:13:43.704 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' '{ 00:13:43.704 "Devices":[ 00:13:43.704 { 00:13:43.704 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e", 00:13:43.704 "Subsystems":[ 00:13:43.704 { 00:13:43.704 "Subsystem":"nvme0", 00:13:43.704 00:13:43.704 "Controllers":[ 00:13:43.704 { 00:13:43.704 "Controller":"nvme0", 00:13:43.704 "SerialNumber":"BTLJ83030AK84P0DGN", 00:13:43.704 "ModelNumber":"INTEL SSDPE2KX040T8", 00:13:43.704 "Firmware":"VDV10184", 00:13:43.704 "Transport":"pcie", 00:13:43.704 "Address":"0000:5e:00.0", 00:13:43.704 "Namespaces":[ 00:13:43.704 { 00:13:43.704 "NameSpace":"nvme0n1", 00:13:43.704 "Generic":"nvme0n1", 00:13:43.704 "NSID":1, 00:13:43.704 "UsedBytes":4000787030016, 00:13:43.704 "MaximumLBA":7814037168, 00:13:43.704 "PhysicalSize":4000787030016, 00:13:43.704 "SectorSize":512 00:13:43.704 } 00:13:43.704 ], 00:13:43.704 "Paths":[ 00:13:43.704 ] 00:13:43.704 } 00:13:43.704 ], 00:13:43.704 "Namespaces":[ 00:13:43.704 ] 00:13:43.704 } 00:13:43.704 ] 00:13:43.704 } 00:13:43.704 ] 00:13:43.704 }' 'nvme0 - 00:13:43.704 \ 00:13:43.704 +- nvme0 pcie 0000:5e:00.0 live' 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node Generic SN Model Namespace Usage Format FW Rev 00:13:43.704 --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- -------- 00:13:43.704 nvme0n1 nvme0n1 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 0x1 4.00 TB / 4.00 
TB 512 B + 0 B VDV10184' 'Subsystem Subsystem-NQN Controllers 00:13:43.704 ---------------- ------------------------------------------------------------------------------------------------ ---------------- 00:13:43.704 nvme0 nvme0 00:13:43.704 00:13:43.704 Device SN MN FR TxPort Address Subsystem Namespaces 00:13:43.704 -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ---------------- 00:13:43.704 nvme0 BTLJ83030AK84P0DGN INTEL SSDPE2KX040T8 VDV10184 pcie 0000:5e:00.0 nvme0 nvme0n1 00:13:43.704 00:13:43.704 Device Generic NSID Usage Format Controllers 00:13:43.704 ------------ ------------ ---------- -------------------------- ---------------- ---------------- 00:13:43.704 nvme0n1 nvme0n1 0x1 4.00 TB / 4.00 TB 512 B + 0 B nvme0' '{ 00:13:43.704 "Devices":[ 00:13:43.704 { 00:13:43.704 "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e", 00:13:43.704 "Subsystems":[ 00:13:43.704 { 00:13:43.704 "Subsystem":"nvme0", 00:13:43.704 00:13:43.704 "Controllers":[ 00:13:43.704 { 00:13:43.704 "Controller":"nvme0", 00:13:43.704 "SerialNumber":"BTLJ83030AK84P0DGN", 00:13:43.704 "ModelNumber":"INTEL SSDPE2KX040T8", 00:13:43.704 "Firmware":"VDV10184", 00:13:43.704 "Transport":"pcie", 00:13:43.704 "Address":"0000:5e:00.0", 00:13:43.704 "Namespaces":[ 00:13:43.704 { 00:13:43.704 "NameSpace":"nvme0n1", 00:13:43.704 "Generic":"nvme0n1", 00:13:43.704 "NSID":1, 00:13:43.704 "UsedBytes":4000787030016, 00:13:43.704 "MaximumLBA":7814037168, 00:13:43.704 "PhysicalSize":4000787030016, 00:13:43.704 "SectorSize":512 00:13:43.704 } 00:13:43.704 ], 00:13:43.704 "Paths":[ 00:13:43.704 ] 00:13:43.704 } 00:13:43.704 ], 00:13:43.704 "Namespaces":[ 00:13:43.704 ] 00:13:43.704 } 00:13:43.704 ] 00:13:43.704 } 00:13:43.704 ] 00:13:43.704 }' 'nvme0 - 00:13:43.704 \ 00:13:43.704 +- nvme0 pcie 0000:5e:00.0 live' 00:13:43.704 20:08:41 -- cuse/spdk_nvme_cli_plugin.sh@1 -- # killprocess 2115506 00:13:43.704 20:08:41 -- common/autotest_common.sh@926 -- # '[' -z 2115506 ']' 00:13:43.704 20:08:41 -- common/autotest_common.sh@930 -- # kill -0 2115506 00:13:43.704 20:08:41 -- common/autotest_common.sh@931 -- # uname 00:13:43.704 20:08:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:43.704 20:08:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2115506 00:13:43.704 20:08:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:43.704 20:08:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:43.704 20:08:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2115506' 00:13:43.704 killing process with pid 2115506 00:13:43.704 20:08:41 -- common/autotest_common.sh@945 -- # kill 2115506 00:13:43.704 20:08:41 -- common/autotest_common.sh@950 -- # wait 2115506 00:13:48.976 20:08:46 -- cuse/spdk_nvme_cli_plugin.sh@1 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:13:51.515 Waiting for block devices as requested 00:13:51.775 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:13:51.775 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:52.035 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:52.035 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:52.035 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:52.295 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:52.295 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:52.295 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:52.555 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:13:52.555 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:13:52.555 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:13:52.555 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:13:52.815 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:13:52.815 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:13:52.815 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:13:53.075 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:13:53.075 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:13:53.075 00:13:53.075 real 0m25.632s 00:13:53.075 user 0m12.916s 00:13:53.075 sys 0m8.001s 00:13:53.075 20:08:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.075 20:08:50 -- common/autotest_common.sh@10 -- # set +x 00:13:53.075 ************************************ 00:13:53.075 END TEST nvme_cli_plugin 00:13:53.075 ************************************ 00:13:53.075 20:08:50 -- cuse/nvme_cuse.sh@21 -- # run_test nvme_smartctl_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh 00:13:53.075 20:08:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:53.075 20:08:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:53.075 20:08:50 -- common/autotest_common.sh@10 -- # set +x 00:13:53.075 ************************************ 00:13:53.075 START TEST nvme_smartctl_cuse 00:13:53.075 ************************************ 00:13:53.075 20:08:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh 00:13:53.335 * Looking for test storage... 00:13:53.335 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:13:53.335 20:08:51 -- cuse/spdk_smartctl_cuse.sh@11 -- # SMARTCTL_CMD='smartctl -d nvme' 00:13:53.335 20:08:51 -- cuse/spdk_smartctl_cuse.sh@12 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:13:53.335 20:08:51 -- cuse/spdk_smartctl_cuse.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:13:56.628 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:13:56.629 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:13:59.945 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:13:59.945 20:08:57 -- cuse/spdk_smartctl_cuse.sh@16 -- # get_first_nvme_bdf 00:13:59.945 20:08:57 -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:59.945 20:08:57 -- common/autotest_common.sh@1509 -- # local bdfs 00:13:59.945 20:08:57 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:59.945 20:08:57 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:59.945 20:08:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:59.945 20:08:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:13:59.945 20:08:57 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:59.945 20:08:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:13:59.945 20:08:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:59.945 20:08:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:13:59.945 20:08:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:13:59.945 20:08:57 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:13:59.945 20:08:57 -- cuse/spdk_smartctl_cuse.sh@16 -- # bdf=0000:5e:00.0 00:13:59.945 20:08:57 -- cuse/spdk_smartctl_cuse.sh@18 -- # PCI_ALLOWED=0000:5e:00.0 00:13:59.945 20:08:57 -- cuse/spdk_smartctl_cuse.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:14:03.264 0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0 00:14:03.264 0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1 00:14:03.264 0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2 00:14:03.264 0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3 00:14:03.264 0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4 00:14:03.264 0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5 00:14:03.264 0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6 00:14:03.264 0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7 00:14:03.264 0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0 00:14:03.264 0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1 00:14:03.264 0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2 00:14:03.264 0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3 00:14:03.264 0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4 00:14:03.264 0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5 00:14:03.264 0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6 00:14:03.265 0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7 00:14:03.265 Waiting for block devices as requested 00:14:03.265 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:14:03.265 20:09:00 -- cuse/spdk_smartctl_cuse.sh@19 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:14:03.265 20:09:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:14:03.265 20:09:00 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:14:03.265 20:09:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:14:03.265 20:09:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:14:03.265 20:09:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:14:03.265 20:09:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:14:03.265 20:09:00 -- cuse/spdk_smartctl_cuse.sh@19 -- # nvme_name=nvme0 00:14:03.265 20:09:00 -- cuse/spdk_smartctl_cuse.sh@20 -- # [[ -z nvme0 ]] 00:14:03.265 20:09:00 -- cuse/spdk_smartctl_cuse.sh@25 -- # smartctl -d nvme --json=g -a /dev/nvme0 00:14:03.265 20:09:00 -- cuse/spdk_smartctl_cuse.sh@25 -- # sort 00:14:03.265 20:09:00 -- cuse/spdk_smartctl_cuse.sh@25 -- # grep -v /dev/nvme0 00:14:03.265 20:09:00 -- cuse/spdk_smartctl_cuse.sh@25 -- # KERNEL_SMART_JSON='json = {}; 00:14:03.265 json.device 
= {}; 00:14:03.265 json.device.protocol = "NVMe"; 00:14:03.265 json.device.type = "nvme"; 00:14:03.265 json.firmware_version = "VDV10184"; 00:14:03.265 json.json_format_version = []; 00:14:03.265 json.json_format_version[0] = 1; 00:14:03.265 json.json_format_version[1] = 0; 00:14:03.265 json.local_time = {}; 00:14:03.265 json.local_time.asctime = "Thu Apr 25 20:09:00 2024 CEST"; 00:14:03.265 json.local_time.time_t = 1714068540; 00:14:03.265 json.model_name = "INTEL SSDPE2KX040T8"; 00:14:03.265 json.nvme_controller_id = 0; 00:14:03.265 json.nvme_error_information_log = {}; 00:14:03.265 json.nvme_error_information_log.read = 16; 00:14:03.265 json.nvme_error_information_log.size = 64; 00:14:03.265 json.nvme_error_information_log.table = []; 00:14:03.265 json.nvme_error_information_log.table[0] = {}; 00:14:03.265 json.nvme_error_information_log.table[0].error_count = 19598; 00:14:03.265 json.nvme_error_information_log.table[0].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[0].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[0].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[0].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[0].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:14:03.265 json.nvme_error_information_log.table[1] = {}; 00:14:03.265 json.nvme_error_information_log.table[10] = {}; 00:14:03.265 json.nvme_error_information_log.table[10].error_count = 19588; 00:14:03.265 json.nvme_error_information_log.table[10].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[10].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[10].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[10].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:14:03.265 json.nvme_error_information_log.table[11] = {}; 00:14:03.265 json.nvme_error_information_log.table[11].error_count = 19587; 00:14:03.265 json.nvme_error_information_log.table[11].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[11].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[11].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[11].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[11].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:14:03.265 
json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:14:03.265 json.nvme_error_information_log.table[12] = {}; 00:14:03.265 json.nvme_error_information_log.table[12].error_count = 19586; 00:14:03.265 json.nvme_error_information_log.table[12].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[12].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[12].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[12].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:14:03.265 json.nvme_error_information_log.table[13] = {}; 00:14:03.265 json.nvme_error_information_log.table[13].error_count = 19585; 00:14:03.265 json.nvme_error_information_log.table[13].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[13].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[13].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[13].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[13].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[13].submission_queue_id = 2; 00:14:03.265 json.nvme_error_information_log.table[14] = {}; 00:14:03.265 json.nvme_error_information_log.table[14].error_count = 19584; 00:14:03.265 json.nvme_error_information_log.table[14].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[14].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[14].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[14].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:14:03.265 json.nvme_error_information_log.table[15] = {}; 00:14:03.265 json.nvme_error_information_log.table[15].error_count = 19583; 00:14:03.265 json.nvme_error_information_log.table[15].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[15].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[15].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[15].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:14:03.265 
json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:14:03.265 json.nvme_error_information_log.table[1].error_count = 19597; 00:14:03.265 json.nvme_error_information_log.table[1].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[1].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[1].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[1].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[1].submission_queue_id = 2; 00:14:03.265 json.nvme_error_information_log.table[2] = {}; 00:14:03.265 json.nvme_error_information_log.table[2].error_count = 19596; 00:14:03.265 json.nvme_error_information_log.table[2].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[2].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[2].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[2].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[2].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:14:03.265 json.nvme_error_information_log.table[3] = {}; 00:14:03.265 json.nvme_error_information_log.table[3].error_count = 19595; 00:14:03.265 json.nvme_error_information_log.table[3].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[3].lba.value = 0; 00:14:03.265 json.nvme_error_information_log.table[3].phase_tag = false; 00:14:03.265 json.nvme_error_information_log.table[3].status_field = {}; 00:14:03.265 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:14:03.265 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:14:03.265 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:14:03.265 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:14:03.265 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:14:03.265 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:14:03.265 json.nvme_error_information_log.table[4] = {}; 00:14:03.265 json.nvme_error_information_log.table[4].error_count = 19594; 00:14:03.265 json.nvme_error_information_log.table[4].lba = {}; 00:14:03.265 json.nvme_error_information_log.table[4].lba.value = 0; 00:14:03.266 json.nvme_error_information_log.table[4].phase_tag = false; 00:14:03.266 json.nvme_error_information_log.table[4].status_field = {}; 00:14:03.266 
json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:14:03.266 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:14:03.266 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:14:03.266 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:14:03.266 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:14:03.266 json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:14:03.266 json.nvme_error_information_log.table[5] = {}; 00:14:03.266 json.nvme_error_information_log.table[5].error_count = 19593; 00:14:03.266 json.nvme_error_information_log.table[5].lba = {}; 00:14:03.266 json.nvme_error_information_log.table[5].lba.value = 0; 00:14:03.266 json.nvme_error_information_log.table[5].phase_tag = false; 00:14:03.266 json.nvme_error_information_log.table[5].status_field = {}; 00:14:03.266 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:14:03.266 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:14:03.266 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:14:03.266 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 00:14:03.266 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:14:03.266 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:14:03.266 json.nvme_error_information_log.table[6] = {}; 00:14:03.266 json.nvme_error_information_log.table[6].error_count = 19592; 00:14:03.266 json.nvme_error_information_log.table[6].lba = {}; 00:14:03.266 json.nvme_error_information_log.table[6].lba.value = 0; 00:14:03.266 json.nvme_error_information_log.table[6].phase_tag = false; 00:14:03.266 json.nvme_error_information_log.table[6].status_field = {}; 00:14:03.266 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:14:03.266 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:14:03.266 json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:14:03.266 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:14:03.266 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:14:03.266 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:14:03.266 json.nvme_error_information_log.table[7] = {}; 00:14:03.266 json.nvme_error_information_log.table[7].error_count = 19591; 00:14:03.266 json.nvme_error_information_log.table[7].lba = {}; 00:14:03.266 json.nvme_error_information_log.table[7].lba.value = 0; 00:14:03.266 json.nvme_error_information_log.table[7].phase_tag = false; 00:14:03.266 json.nvme_error_information_log.table[7].status_field = {}; 00:14:03.266 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:14:03.266 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:14:03.266 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:14:03.266 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:14:03.266 json.nvme_error_information_log.table[7].status_field.value = 24582; 00:14:03.266 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:14:03.266 json.nvme_error_information_log.table[8] = {}; 00:14:03.266 json.nvme_error_information_log.table[8].error_count = 19590; 00:14:03.266 json.nvme_error_information_log.table[8].lba = {}; 00:14:03.266 
json.nvme_error_information_log.table[8].lba.value = 0; 00:14:03.266 json.nvme_error_information_log.table[8].phase_tag = false; 00:14:03.266 json.nvme_error_information_log.table[8].status_field = {}; 00:14:03.266 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:14:03.266 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:14:03.266 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:14:03.266 json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:14:03.266 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:14:03.266 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:14:03.266 json.nvme_error_information_log.table[9] = {}; 00:14:03.266 json.nvme_error_information_log.table[9].error_count = 19589; 00:14:03.266 json.nvme_error_information_log.table[9].lba = {}; 00:14:03.266 json.nvme_error_information_log.table[9].lba.value = 0; 00:14:03.266 json.nvme_error_information_log.table[9].phase_tag = false; 00:14:03.266 json.nvme_error_information_log.table[9].status_field = {}; 00:14:03.266 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:14:03.266 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:14:03.266 json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:14:03.266 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:14:03.266 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:14:03.266 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:14:03.266 json.nvme_error_information_log.unread = 48; 00:14:03.266 json.nvme_ieee_oui_identifier = 6083300; 00:14:03.266 json.nvme_number_of_namespaces = 128; 00:14:03.266 json.nvme_pci_vendor = {}; 00:14:03.266 json.nvme_pci_vendor.id = 32902; 00:14:03.266 json.nvme_pci_vendor.subsystem_id = 32902; 00:14:03.266 json.nvme_smart_health_information_log = {}; 00:14:03.266 json.nvme_smart_health_information_log.available_spare = 99; 00:14:03.266 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:14:03.266 json.nvme_smart_health_information_log.controller_busy_time = 2527; 00:14:03.266 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:14:03.266 json.nvme_smart_health_information_log.critical_warning = 0; 00:14:03.266 json.nvme_smart_health_information_log.data_units_read = 371113763; 00:14:03.266 json.nvme_smart_health_information_log.data_units_written = 510510231; 00:14:03.266 json.nvme_smart_health_information_log.host_reads = 22084650541; 00:14:03.266 json.nvme_smart_health_information_log.host_writes = 25063408073; 00:14:03.266 json.nvme_smart_health_information_log.media_errors = 0; 00:14:03.266 json.nvme_smart_health_information_log.num_err_log_entries = 19598; 00:14:03.266 json.nvme_smart_health_information_log.percentage_used = 17; 00:14:03.266 json.nvme_smart_health_information_log.power_cycles = 28; 00:14:03.266 json.nvme_smart_health_information_log.power_on_hours = 15505; 00:14:03.266 json.nvme_smart_health_information_log.temperature = 38; 00:14:03.266 json.nvme_smart_health_information_log.unsafe_shutdowns = 45; 00:14:03.266 json.nvme_smart_health_information_log.warning_temp_time = 1188; 00:14:03.266 json.nvme_total_capacity = 4000787030016; 00:14:03.266 json.nvme_unallocated_capacity = 0; 00:14:03.266 json.nvme_version = {}; 00:14:03.266 json.nvme_version.string = "1.2"; 00:14:03.266 json.nvme_version.value = 66048; 
00:14:03.266 json.power_cycle_count = 28; 00:14:03.266 json.power_on_time = {}; 00:14:03.266 json.power_on_time.hours = 15505; 00:14:03.266 json.serial_number = "BTLJ83030AK84P0DGN"; 00:14:03.266 json.smartctl = {}; 00:14:03.266 json.smartctl.argv = []; 00:14:03.266 json.smartctl.argv[0] = "smartctl"; 00:14:03.266 json.smartctl.argv[1] = "-d"; 00:14:03.266 json.smartctl.argv[2] = "nvme"; 00:14:03.266 json.smartctl.argv[3] = "--json=g"; 00:14:03.266 json.smartctl.argv[4] = "-a"; 00:14:03.266 json.smartctl.build_info = "(local build)"; 00:14:03.266 json.smartctl.exit_status = 0; 00:14:03.266 json.smartctl.platform_info = "x86_64-linux-6.7.0-68.fc38.x86_64"; 00:14:03.266 json.smartctl.pre_release = false; 00:14:03.266 json.smartctl.svn_revision = "5530"; 00:14:03.266 json.smartctl.version = []; 00:14:03.266 json.smartctl.version[0] = 7; 00:14:03.266 json.smartctl.version[1] = 4; 00:14:03.266 json.smart_status = {}; 00:14:03.266 json.smart_status.nvme = {}; 00:14:03.266 json.smart_status.nvme.value = 0; 00:14:03.266 json.smart_status.passed = true; 00:14:03.266 json.smart_support = {}; 00:14:03.266 json.smart_support.available = true; 00:14:03.266 json.smart_support.enabled = true; 00:14:03.266 json.temperature = {}; 00:14:03.266 json.temperature.current = 38;' 00:14:03.266 20:09:00 -- cuse/spdk_smartctl_cuse.sh@27 -- # smartctl -d nvme -i /dev/nvme0n1 00:14:03.266 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:03.266 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:03.266 00:14:03.266 === START OF INFORMATION SECTION === 00:14:03.266 Model Number: INTEL SSDPE2KX040T8 00:14:03.266 Serial Number: BTLJ83030AK84P0DGN 00:14:03.266 Firmware Version: VDV10184 00:14:03.266 PCI Vendor/Subsystem ID: 0x8086 00:14:03.266 IEEE OUI Identifier: 0x5cd2e4 00:14:03.266 Total NVM Capacity: 4,000,787,030,016 [4.00 TB] 00:14:03.266 Unallocated NVM Capacity: 0 00:14:03.266 Controller ID: 0 00:14:03.266 NVMe Version: 1.2 00:14:03.266 Number of Namespaces: 128 00:14:03.266 Namespace 1 Size/Capacity: 4,000,787,030,016 [4.00 TB] 00:14:03.266 Namespace 1 Formatted LBA Size: 512 00:14:03.266 Namespace 1 IEEE EUI-64: 000000 0000000f3d 00:14:03.266 Local Time is: Thu Apr 25 20:09:00 2024 CEST 00:14:03.266 00:14:03.266 20:09:00 -- cuse/spdk_smartctl_cuse.sh@30 -- # smartctl -d nvme -l error /dev/nvme0 00:14:03.266 20:09:00 -- cuse/spdk_smartctl_cuse.sh@30 -- # KERNEL_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:03.266 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:03.266 00:14:03.266 === START OF SMART DATA SECTION === 00:14:03.266 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:14:03.266 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:14:03.266 0 19598 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 1 19597 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 2 19596 0 - 0xc00c - 0 - - Internal Error 00:14:03.266 3 19595 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 4 19594 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 5 19593 0 - 0xc00c - 0 - - Internal Error 00:14:03.266 6 19592 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 7 19591 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 8 19590 0 - 0xc00c - 0 - - Internal Error 00:14:03.266 9 19589 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 10 19588 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 11 19587 0 - 0xc00c - 0 - - Internal Error 00:14:03.266 12 19586 2 - 0xc00c - 0 - - Internal Error 
00:14:03.266 13 19585 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 14 19584 0 - 0xc00c - 0 - - Internal Error 00:14:03.266 15 19583 2 - 0xc00c - 0 - - Internal Error 00:14:03.266 ... (48 entries not read)' 00:14:03.267 20:09:00 -- cuse/spdk_smartctl_cuse.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:14:06.555 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:14:06.555 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:14:09.848 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:14:09.848 20:09:07 -- cuse/spdk_smartctl_cuse.sh@35 -- # spdk_tgt_pid=2122594 00:14:09.848 20:09:07 -- cuse/spdk_smartctl_cuse.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:14:09.848 20:09:07 -- cuse/spdk_smartctl_cuse.sh@36 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:14:09.849 20:09:07 -- cuse/spdk_smartctl_cuse.sh@38 -- # waitforlisten 2122594 00:14:09.849 20:09:07 -- common/autotest_common.sh@819 -- # '[' -z 2122594 ']' 00:14:09.849 20:09:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.849 20:09:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:09.849 20:09:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:09.849 20:09:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:09.849 20:09:07 -- common/autotest_common.sh@10 -- # set +x 00:14:09.849 [2024-04-25 20:09:07.390180] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:14:09.849 [2024-04-25 20:09:07.390256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122594 ] 00:14:09.849 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.849 [2024-04-25 20:09:07.494916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:09.849 [2024-04-25 20:09:07.596226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:09.849 [2024-04-25 20:09:07.596414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.849 [2024-04-25 20:09:07.596419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.108 [2024-04-25 20:09:07.810778] 'OCF_Core' volume operations registered 00:14:10.108 [2024-04-25 20:09:07.814267] 'OCF_Cache' volume operations registered 00:14:10.108 [2024-04-25 20:09:07.818236] 'OCF Composite' volume operations registered 00:14:10.108 [2024-04-25 20:09:07.821763] 'SPDK_block_device' volume operations registered 00:14:10.676 20:09:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:10.676 20:09:08 -- common/autotest_common.sh@852 -- # return 0 00:14:10.676 20:09:08 -- cuse/spdk_smartctl_cuse.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:14:13.965 Nvme0n1 00:14:13.965 20:09:11 -- cuse/spdk_smartctl_cuse.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:14:13.966 [2024-04-25 20:09:11.609604] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:14:13.966 [2024-04-25 20:09:11.609790] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:14:13.966 [2024-04-25 20:09:11.609910] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:14:13.966 20:09:11 -- cuse/spdk_smartctl_cuse.sh@43 -- # sleep 5 00:14:19.251 20:09:16 -- cuse/spdk_smartctl_cuse.sh@45 -- # '[' '!' -c /dev/spdk/nvme0 ']' 00:14:19.251 20:09:16 -- cuse/spdk_smartctl_cuse.sh@49 -- # grep -v /dev/spdk/nvme0 00:14:19.251 20:09:16 -- cuse/spdk_smartctl_cuse.sh@49 -- # smartctl -d nvme --json=g -a /dev/spdk/nvme0 00:14:19.251 20:09:16 -- cuse/spdk_smartctl_cuse.sh@49 -- # sort 00:14:19.251 [2024-04-25 20:09:16.662073] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40. 
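At this point the second test in the section, spdk_smartctl_cuse.sh, has attached the controller and created a CUSE node for it (rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 followed by rpc.py bdev_nvme_cuse_register -n Nvme0, as shown in the trace) and is now re-running the same smartctl query it ran earlier against the kernel node, this time against /dev/spdk/nvme0. The "Unsupported IOCTL 0x4E40" notice is smartctl issuing an ioctl the CUSE layer does not implement; the run continues past it and the JSON output is still captured. A rough sketch of the comparison idea, assuming a smartctl build with NVMe JSON support and using illustrative variable names:

#!/usr/bin/env bash
# Sketch of the kernel-vs-CUSE SMART comparison, not the test's exact code.
# The CUSE node /dev/spdk/nvme0 is assumed to exist already, created by the
# bdev_nvme_cuse_register RPC visible in the trace above.
kernel_json=$(smartctl -d nvme --json=g -a /dev/nvme0 | grep -v /dev/nvme0 | sort)
cuse_json=$(smartctl -d nvme --json=g -a /dev/spdk/nvme0 | grep -v /dev/spdk/nvme0 | sort)

# Volatile fields such as json.local_time.* will still differ between the two
# captures, so a byte-for-byte comparison needs those filtered out as well.
diff <(printf '%s\n' "$kernel_json") <(printf '%s\n' "$cuse_json")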
00:14:19.251 20:09:16 -- cuse/spdk_smartctl_cuse.sh@49 -- # CUSE_SMART_JSON='json = {}; 00:14:19.251 json.device = {}; 00:14:19.251 json.device.protocol = "NVMe"; 00:14:19.251 json.device.type = "nvme"; 00:14:19.251 json.firmware_version = "VDV10184"; 00:14:19.251 json.json_format_version = []; 00:14:19.251 json.json_format_version[0] = 1; 00:14:19.251 json.json_format_version[1] = 0; 00:14:19.251 json.local_time = {}; 00:14:19.251 json.local_time.asctime = "Thu Apr 25 20:09:16 2024 CEST"; 00:14:19.251 json.local_time.time_t = 1714068556; 00:14:19.251 json.model_name = "INTEL SSDPE2KX040T8"; 00:14:19.251 json.nvme_controller_id = 0; 00:14:19.251 json.nvme_error_information_log = {}; 00:14:19.251 json.nvme_error_information_log.read = 16; 00:14:19.251 json.nvme_error_information_log.size = 64; 00:14:19.251 json.nvme_error_information_log.table = []; 00:14:19.251 json.nvme_error_information_log.table[0] = {}; 00:14:19.251 json.nvme_error_information_log.table[0].error_count = 19598; 00:14:19.251 json.nvme_error_information_log.table[0].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[0].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[0].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[0].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[0].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[1] = {}; 00:14:19.252 json.nvme_error_information_log.table[10] = {}; 00:14:19.252 json.nvme_error_information_log.table[10].error_count = 19588; 00:14:19.252 json.nvme_error_information_log.table[10].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[10].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[10].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[10].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[11] = {}; 00:14:19.252 json.nvme_error_information_log.table[11].error_count = 19587; 00:14:19.252 json.nvme_error_information_log.table[11].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[11].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[11].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[11].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[11].status_field.string = 
"Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:14:19.252 json.nvme_error_information_log.table[12] = {}; 00:14:19.252 json.nvme_error_information_log.table[12].error_count = 19586; 00:14:19.252 json.nvme_error_information_log.table[12].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[12].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[12].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[12].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[13] = {}; 00:14:19.252 json.nvme_error_information_log.table[13].error_count = 19585; 00:14:19.252 json.nvme_error_information_log.table[13].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[13].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[13].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[13].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[13].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[13].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[14] = {}; 00:14:19.252 json.nvme_error_information_log.table[14].error_count = 19584; 00:14:19.252 json.nvme_error_information_log.table[14].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[14].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[14].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[14].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:14:19.252 json.nvme_error_information_log.table[15] = {}; 00:14:19.252 json.nvme_error_information_log.table[15].error_count = 19583; 00:14:19.252 json.nvme_error_information_log.table[15].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[15].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[15].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[15].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:14:19.252 
json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[1].error_count = 19597; 00:14:19.252 json.nvme_error_information_log.table[1].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[1].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[1].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[1].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[1].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[2] = {}; 00:14:19.252 json.nvme_error_information_log.table[2].error_count = 19596; 00:14:19.252 json.nvme_error_information_log.table[2].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[2].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[2].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[2].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[2].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:14:19.252 json.nvme_error_information_log.table[3] = {}; 00:14:19.252 json.nvme_error_information_log.table[3].error_count = 19595; 00:14:19.252 json.nvme_error_information_log.table[3].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[3].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[3].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[3].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[4] = {}; 00:14:19.252 json.nvme_error_information_log.table[4].error_count = 19594; 00:14:19.252 json.nvme_error_information_log.table[4].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[4].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[4].phase_tag = false; 00:14:19.252 
json.nvme_error_information_log.table[4].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[5] = {}; 00:14:19.252 json.nvme_error_information_log.table[5].error_count = 19593; 00:14:19.252 json.nvme_error_information_log.table[5].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[5].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[5].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[5].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:14:19.252 json.nvme_error_information_log.table[6] = {}; 00:14:19.252 json.nvme_error_information_log.table[6].error_count = 19592; 00:14:19.252 json.nvme_error_information_log.table[6].lba = {}; 00:14:19.252 json.nvme_error_information_log.table[6].lba.value = 0; 00:14:19.252 json.nvme_error_information_log.table[6].phase_tag = false; 00:14:19.252 json.nvme_error_information_log.table[6].status_field = {}; 00:14:19.252 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:14:19.252 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:14:19.252 json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:14:19.252 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:14:19.252 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:14:19.252 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:14:19.252 json.nvme_error_information_log.table[7] = {}; 00:14:19.252 json.nvme_error_information_log.table[7].error_count = 19591; 00:14:19.253 json.nvme_error_information_log.table[7].lba = {}; 00:14:19.253 json.nvme_error_information_log.table[7].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[7].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[7].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[7].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:14:19.253 json.nvme_error_information_log.table[8] = {}; 00:14:19.253 json.nvme_error_information_log.table[8].error_count = 19590; 00:14:19.253 
json.nvme_error_information_log.table[8].lba = {}; 00:14:19.253 json.nvme_error_information_log.table[8].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[8].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[8].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:14:19.253 json.nvme_error_information_log.table[9] = {}; 00:14:19.253 json.nvme_error_information_log.table[9].error_count = 19589; 00:14:19.253 json.nvme_error_information_log.table[9].lba = {}; 00:14:19.253 json.nvme_error_information_log.table[9].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[9].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[9].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:14:19.253 json.nvme_error_information_log.unread = 48; 00:14:19.253 json.nvme_ieee_oui_identifier = 6083300; 00:14:19.253 json.nvme_number_of_namespaces = 128; 00:14:19.253 json.nvme_pci_vendor = {}; 00:14:19.253 json.nvme_pci_vendor.id = 32902; 00:14:19.253 json.nvme_pci_vendor.subsystem_id = 32902; 00:14:19.253 json.nvme_smart_health_information_log = {}; 00:14:19.253 json.nvme_smart_health_information_log.available_spare = 99; 00:14:19.253 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:14:19.253 json.nvme_smart_health_information_log.controller_busy_time = 2527; 00:14:19.253 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:14:19.253 json.nvme_smart_health_information_log.critical_warning = 0; 00:14:19.253 json.nvme_smart_health_information_log.data_units_read = 371113765; 00:14:19.253 json.nvme_smart_health_information_log.data_units_written = 510510231; 00:14:19.253 json.nvme_smart_health_information_log.host_reads = 22084650596; 00:14:19.253 json.nvme_smart_health_information_log.host_writes = 25063408073; 00:14:19.253 json.nvme_smart_health_information_log.media_errors = 0; 00:14:19.253 json.nvme_smart_health_information_log.num_err_log_entries = 19598; 00:14:19.253 json.nvme_smart_health_information_log.percentage_used = 17; 00:14:19.253 json.nvme_smart_health_information_log.power_cycles = 28; 00:14:19.253 json.nvme_smart_health_information_log.power_on_hours = 15505; 00:14:19.253 json.nvme_smart_health_information_log.temperature = 38; 00:14:19.253 json.nvme_smart_health_information_log.unsafe_shutdowns = 45; 00:14:19.253 json.nvme_smart_health_information_log.warning_temp_time = 1188; 00:14:19.253 json.nvme_total_capacity = 4000787030016; 00:14:19.253 json.nvme_unallocated_capacity = 0; 00:14:19.253 json.nvme_version = {}; 00:14:19.253 
json.nvme_version.string = "1.2"; 00:14:19.253 json.nvme_version.value = 66048; 00:14:19.253 json.power_cycle_count = 28; 00:14:19.253 json.power_on_time = {}; 00:14:19.253 json.power_on_time.hours = 15505; 00:14:19.253 json.serial_number = "BTLJ83030AK84P0DGN"; 00:14:19.253 json.smartctl = {}; 00:14:19.253 json.smartctl.argv = []; 00:14:19.253 json.smartctl.argv[0] = "smartctl"; 00:14:19.253 json.smartctl.argv[1] = "-d"; 00:14:19.253 json.smartctl.argv[2] = "nvme"; 00:14:19.253 json.smartctl.argv[3] = "--json=g"; 00:14:19.253 json.smartctl.argv[4] = "-a"; 00:14:19.253 json.smartctl.build_info = "(local build)"; 00:14:19.253 json.smartctl.exit_status = 0; 00:14:19.253 json.smartctl.platform_info = "x86_64-linux-6.7.0-68.fc38.x86_64"; 00:14:19.253 json.smartctl.pre_release = false; 00:14:19.253 json.smartctl.svn_revision = "5530"; 00:14:19.253 json.smartctl.version = []; 00:14:19.253 json.smartctl.version[0] = 7; 00:14:19.253 json.smartctl.version[1] = 4; 00:14:19.253 json.smart_status = {}; 00:14:19.253 json.smart_status.nvme = {}; 00:14:19.253 json.smart_status.nvme.value = 0; 00:14:19.253 json.smart_status.passed = true; 00:14:19.253 json.smart_support = {}; 00:14:19.253 json.smart_support.available = true; 00:14:19.253 json.smart_support.enabled = true; 00:14:19.253 json.temperature = {}; 00:14:19.253 json.temperature.current = 38;' 00:14:19.253 20:09:16 -- cuse/spdk_smartctl_cuse.sh@51 -- # diff '--changed-group-format=%<' --unchanged-group-format= /dev/fd/62 /dev/fd/61 00:14:19.253 20:09:16 -- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {}; 00:14:19.253 json.device = {}; 00:14:19.253 json.device.protocol = "NVMe"; 00:14:19.253 json.device.type = "nvme"; 00:14:19.253 json.firmware_version = "VDV10184"; 00:14:19.253 json.json_format_version = []; 00:14:19.253 json.json_format_version[0] = 1; 00:14:19.253 json.json_format_version[1] = 0; 00:14:19.253 json.local_time = {}; 00:14:19.253 json.local_time.asctime = "Thu Apr 25 20:09:00 2024 CEST"; 00:14:19.253 json.local_time.time_t = 1714068540; 00:14:19.253 json.model_name = "INTEL SSDPE2KX040T8"; 00:14:19.253 json.nvme_controller_id = 0; 00:14:19.253 json.nvme_error_information_log = {}; 00:14:19.253 json.nvme_error_information_log.read = 16; 00:14:19.253 json.nvme_error_information_log.size = 64; 00:14:19.253 json.nvme_error_information_log.table = []; 00:14:19.253 json.nvme_error_information_log.table[0] = {}; 00:14:19.253 json.nvme_error_information_log.table[0].error_count = 19598; 00:14:19.253 json.nvme_error_information_log.table[0].lba = {}; 00:14:19.253 json.nvme_error_information_log.table[0].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[0].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[0].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[0].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:14:19.253 json.nvme_error_information_log.table[1] = {}; 00:14:19.253 json.nvme_error_information_log.table[10] = {}; 00:14:19.253 json.nvme_error_information_log.table[10].error_count = 19588; 00:14:19.253 json.nvme_error_information_log.table[10].lba = {}; 00:14:19.253 
json.nvme_error_information_log.table[10].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[10].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[10].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:14:19.253 json.nvme_error_information_log.table[11] = {}; 00:14:19.253 json.nvme_error_information_log.table[11].error_count = 19587; 00:14:19.253 json.nvme_error_information_log.table[11].lba = {}; 00:14:19.253 json.nvme_error_information_log.table[11].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[11].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[11].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[11].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:14:19.253 json.nvme_error_information_log.table[12] = {}; 00:14:19.253 json.nvme_error_information_log.table[12].error_count = 19586; 00:14:19.253 json.nvme_error_information_log.table[12].lba = {}; 00:14:19.253 json.nvme_error_information_log.table[12].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[12].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[12].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:14:19.253 json.nvme_error_information_log.table[13] = {}; 00:14:19.253 json.nvme_error_information_log.table[13].error_count = 19585; 00:14:19.253 json.nvme_error_information_log.table[13].lba = {}; 00:14:19.253 json.nvme_error_information_log.table[13].lba.value = 0; 00:14:19.253 json.nvme_error_information_log.table[13].phase_tag = false; 00:14:19.253 json.nvme_error_information_log.table[13].status_field = {}; 00:14:19.253 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:14:19.253 json.nvme_error_information_log.table[13].status_field.status_code = 6; 00:14:19.253 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:14:19.253 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:14:19.253 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:14:19.253 json.nvme_error_information_log.table[13].submission_queue_id = 2; 
00:14:19.254 json.nvme_error_information_log.table[14] = {}; 00:14:19.254 json.nvme_error_information_log.table[14].error_count = 19584; 00:14:19.254 json.nvme_error_information_log.table[14].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[14].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[14].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[14].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:14:19.254 json.nvme_error_information_log.table[15] = {}; 00:14:19.254 json.nvme_error_information_log.table[15].error_count = 19583; 00:14:19.254 json.nvme_error_information_log.table[15].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[15].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[15].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[15].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:14:19.254 json.nvme_error_information_log.table[1].error_count = 19597; 00:14:19.254 json.nvme_error_information_log.table[1].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[1].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[1].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[1].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[1].submission_queue_id = 2; 00:14:19.254 json.nvme_error_information_log.table[2] = {}; 00:14:19.254 json.nvme_error_information_log.table[2].error_count = 19596; 00:14:19.254 json.nvme_error_information_log.table[2].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[2].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[2].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[2].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:14:19.254 
json.nvme_error_information_log.table[2].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:14:19.254 json.nvme_error_information_log.table[3] = {}; 00:14:19.254 json.nvme_error_information_log.table[3].error_count = 19595; 00:14:19.254 json.nvme_error_information_log.table[3].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[3].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[3].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[3].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:14:19.254 json.nvme_error_information_log.table[4] = {}; 00:14:19.254 json.nvme_error_information_log.table[4].error_count = 19594; 00:14:19.254 json.nvme_error_information_log.table[4].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[4].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[4].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[4].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:14:19.254 json.nvme_error_information_log.table[5] = {}; 00:14:19.254 json.nvme_error_information_log.table[5].error_count = 19593; 00:14:19.254 json.nvme_error_information_log.table[5].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[5].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[5].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[5].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:14:19.254 json.nvme_error_information_log.table[6] = {}; 00:14:19.254 json.nvme_error_information_log.table[6].error_count = 19592; 00:14:19.254 json.nvme_error_information_log.table[6].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[6].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[6].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[6].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:14:19.254 
json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:14:19.254 json.nvme_error_information_log.table[7] = {}; 00:14:19.254 json.nvme_error_information_log.table[7].error_count = 19591; 00:14:19.254 json.nvme_error_information_log.table[7].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[7].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[7].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[7].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[7].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:14:19.254 json.nvme_error_information_log.table[8] = {}; 00:14:19.254 json.nvme_error_information_log.table[8].error_count = 19590; 00:14:19.254 json.nvme_error_information_log.table[8].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[8].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[8].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[8].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:14:19.254 json.nvme_error_information_log.table[9] = {}; 00:14:19.254 json.nvme_error_information_log.table[9].error_count = 19589; 00:14:19.254 json.nvme_error_information_log.table[9].lba = {}; 00:14:19.254 json.nvme_error_information_log.table[9].lba.value = 0; 00:14:19.254 json.nvme_error_information_log.table[9].phase_tag = false; 00:14:19.254 json.nvme_error_information_log.table[9].status_field = {}; 00:14:19.254 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:14:19.254 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:14:19.254 json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:14:19.254 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:14:19.254 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:14:19.254 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:14:19.254 json.nvme_error_information_log.unread = 48; 00:14:19.254 json.nvme_ieee_oui_identifier = 6083300; 00:14:19.254 json.nvme_number_of_namespaces = 128; 00:14:19.254 json.nvme_pci_vendor = {}; 00:14:19.254 json.nvme_pci_vendor.id = 32902; 00:14:19.254 json.nvme_pci_vendor.subsystem_id = 32902; 00:14:19.254 json.nvme_smart_health_information_log = {}; 00:14:19.254 json.nvme_smart_health_information_log.available_spare = 
99; 00:14:19.254 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:14:19.254 json.nvme_smart_health_information_log.controller_busy_time = 2527; 00:14:19.254 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:14:19.254 json.nvme_smart_health_information_log.critical_warning = 0; 00:14:19.254 json.nvme_smart_health_information_log.data_units_read = 371113763; 00:14:19.254 json.nvme_smart_health_information_log.data_units_written = 510510231; 00:14:19.254 json.nvme_smart_health_information_log.host_reads = 22084650541; 00:14:19.254 json.nvme_smart_health_information_log.host_writes = 25063408073; 00:14:19.254 json.nvme_smart_health_information_log.media_errors = 0; 00:14:19.254 json.nvme_smart_health_information_log.num_err_log_entries = 19598; 00:14:19.254 json.nvme_smart_health_information_log.percentage_used = 17; 00:14:19.254 json.nvme_smart_health_information_log.power_cycles = 28; 00:14:19.254 json.nvme_smart_health_information_log.power_on_hours = 15505; 00:14:19.254 json.nvme_smart_health_information_log.temperature = 38; 00:14:19.254 json.nvme_smart_health_information_log.unsafe_shutdowns = 45; 00:14:19.254 json.nvme_smart_health_information_log.warning_temp_time = 1188; 00:14:19.254 json.nvme_total_capacity = 4000787030016; 00:14:19.254 json.nvme_unallocated_capacity = 0; 00:14:19.254 json.nvme_version = {}; 00:14:19.254 json.nvme_version.string = "1.2"; 00:14:19.255 json.nvme_version.value = 66048; 00:14:19.255 json.power_cycle_count = 28; 00:14:19.255 json.power_on_time = {}; 00:14:19.255 json.power_on_time.hours = 15505; 00:14:19.255 json.serial_number = "BTLJ83030AK84P0DGN"; 00:14:19.255 json.smartctl = {}; 00:14:19.255 json.smartctl.argv = []; 00:14:19.255 json.smartctl.argv[0] = "smartctl"; 00:14:19.255 json.smartctl.argv[1] = "-d"; 00:14:19.255 json.smartctl.argv[2] = "nvme"; 00:14:19.255 json.smartctl.argv[3] = "--json=g"; 00:14:19.255 json.smartctl.argv[4] = "-a"; 00:14:19.255 json.smartctl.build_info = "(local build)"; 00:14:19.255 json.smartctl.exit_status = 0; 00:14:19.255 json.smartctl.platform_info = "x86_64-linux-6.7.0-68.fc38.x86_64"; 00:14:19.255 json.smartctl.pre_release = false; 00:14:19.255 json.smartctl.svn_revision = "5530"; 00:14:19.255 json.smartctl.version = []; 00:14:19.255 json.smartctl.version[0] = 7; 00:14:19.255 json.smartctl.version[1] = 4; 00:14:19.255 json.smart_status = {}; 00:14:19.255 json.smart_status.nvme = {}; 00:14:19.255 json.smart_status.nvme.value = 0; 00:14:19.255 json.smart_status.passed = true; 00:14:19.255 json.smart_support = {}; 00:14:19.255 json.smart_support.available = true; 00:14:19.255 json.smart_support.enabled = true; 00:14:19.255 json.temperature = {}; 00:14:19.255 json.temperature.current = 38;' 00:14:19.255 20:09:16 -- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {}; 00:14:19.255 json.device = {}; 00:14:19.255 json.device.protocol = "NVMe"; 00:14:19.255 json.device.type = "nvme"; 00:14:19.255 json.firmware_version = "VDV10184"; 00:14:19.255 json.json_format_version = []; 00:14:19.255 json.json_format_version[0] = 1; 00:14:19.255 json.json_format_version[1] = 0; 00:14:19.255 json.local_time = {}; 00:14:19.255 json.local_time.asctime = "Thu Apr 25 20:09:16 2024 CEST"; 00:14:19.255 json.local_time.time_t = 1714068556; 00:14:19.255 json.model_name = "INTEL SSDPE2KX040T8"; 00:14:19.255 json.nvme_controller_id = 0; 00:14:19.255 json.nvme_error_information_log = {}; 00:14:19.255 json.nvme_error_information_log.read = 16; 00:14:19.255 json.nvme_error_information_log.size = 64; 00:14:19.255 
json.nvme_error_information_log.table = []; 00:14:19.255 json.nvme_error_information_log.table[0] = {}; 00:14:19.255 json.nvme_error_information_log.table[0].error_count = 19598; 00:14:19.255 json.nvme_error_information_log.table[0].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[0].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[0].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[0].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[0].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[0].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[0].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[0].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[0].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[0].submission_queue_id = 2; 00:14:19.255 json.nvme_error_information_log.table[1] = {}; 00:14:19.255 json.nvme_error_information_log.table[10] = {}; 00:14:19.255 json.nvme_error_information_log.table[10].error_count = 19588; 00:14:19.255 json.nvme_error_information_log.table[10].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[10].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[10].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[10].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[10].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[10].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[10].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[10].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[10].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[10].submission_queue_id = 2; 00:14:19.255 json.nvme_error_information_log.table[11] = {}; 00:14:19.255 json.nvme_error_information_log.table[11].error_count = 19587; 00:14:19.255 json.nvme_error_information_log.table[11].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[11].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[11].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[11].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[11].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[11].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[11].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[11].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[11].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[11].submission_queue_id = 0; 00:14:19.255 json.nvme_error_information_log.table[12] = {}; 00:14:19.255 json.nvme_error_information_log.table[12].error_count = 19586; 00:14:19.255 json.nvme_error_information_log.table[12].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[12].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[12].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[12].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[12].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[12].status_field.status_code = 6; 00:14:19.255 
json.nvme_error_information_log.table[12].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[12].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[12].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[12].submission_queue_id = 2; 00:14:19.255 json.nvme_error_information_log.table[13] = {}; 00:14:19.255 json.nvme_error_information_log.table[13].error_count = 19585; 00:14:19.255 json.nvme_error_information_log.table[13].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[13].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[13].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[13].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[13].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[13].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[13].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[13].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[13].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[13].submission_queue_id = 2; 00:14:19.255 json.nvme_error_information_log.table[14] = {}; 00:14:19.255 json.nvme_error_information_log.table[14].error_count = 19584; 00:14:19.255 json.nvme_error_information_log.table[14].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[14].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[14].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[14].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[14].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[14].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[14].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[14].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[14].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[14].submission_queue_id = 0; 00:14:19.255 json.nvme_error_information_log.table[15] = {}; 00:14:19.255 json.nvme_error_information_log.table[15].error_count = 19583; 00:14:19.255 json.nvme_error_information_log.table[15].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[15].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[15].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[15].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[15].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[15].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[15].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[15].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[15].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[15].submission_queue_id = 2; 00:14:19.255 json.nvme_error_information_log.table[1].error_count = 19597; 00:14:19.255 json.nvme_error_information_log.table[1].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[1].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[1].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[1].status_field = {}; 00:14:19.255 
json.nvme_error_information_log.table[1].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[1].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[1].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[1].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[1].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[1].submission_queue_id = 2; 00:14:19.255 json.nvme_error_information_log.table[2] = {}; 00:14:19.255 json.nvme_error_information_log.table[2].error_count = 19596; 00:14:19.255 json.nvme_error_information_log.table[2].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[2].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[2].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[2].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[2].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[2].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[2].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[2].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[2].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[2].submission_queue_id = 0; 00:14:19.255 json.nvme_error_information_log.table[3] = {}; 00:14:19.255 json.nvme_error_information_log.table[3].error_count = 19595; 00:14:19.255 json.nvme_error_information_log.table[3].lba = {}; 00:14:19.255 json.nvme_error_information_log.table[3].lba.value = 0; 00:14:19.255 json.nvme_error_information_log.table[3].phase_tag = false; 00:14:19.255 json.nvme_error_information_log.table[3].status_field = {}; 00:14:19.255 json.nvme_error_information_log.table[3].status_field.do_not_retry = true; 00:14:19.255 json.nvme_error_information_log.table[3].status_field.status_code = 6; 00:14:19.255 json.nvme_error_information_log.table[3].status_field.status_code_type = 0; 00:14:19.255 json.nvme_error_information_log.table[3].status_field.string = "Internal Error"; 00:14:19.255 json.nvme_error_information_log.table[3].status_field.value = 24582; 00:14:19.255 json.nvme_error_information_log.table[3].submission_queue_id = 2; 00:14:19.255 json.nvme_error_information_log.table[4] = {}; 00:14:19.256 json.nvme_error_information_log.table[4].error_count = 19594; 00:14:19.256 json.nvme_error_information_log.table[4].lba = {}; 00:14:19.256 json.nvme_error_information_log.table[4].lba.value = 0; 00:14:19.256 json.nvme_error_information_log.table[4].phase_tag = false; 00:14:19.256 json.nvme_error_information_log.table[4].status_field = {}; 00:14:19.256 json.nvme_error_information_log.table[4].status_field.do_not_retry = true; 00:14:19.256 json.nvme_error_information_log.table[4].status_field.status_code = 6; 00:14:19.256 json.nvme_error_information_log.table[4].status_field.status_code_type = 0; 00:14:19.256 json.nvme_error_information_log.table[4].status_field.string = "Internal Error"; 00:14:19.256 json.nvme_error_information_log.table[4].status_field.value = 24582; 00:14:19.256 json.nvme_error_information_log.table[4].submission_queue_id = 2; 00:14:19.256 json.nvme_error_information_log.table[5] = {}; 00:14:19.256 json.nvme_error_information_log.table[5].error_count = 19593; 00:14:19.256 json.nvme_error_information_log.table[5].lba = {}; 00:14:19.256 
json.nvme_error_information_log.table[5].lba.value = 0; 00:14:19.256 json.nvme_error_information_log.table[5].phase_tag = false; 00:14:19.256 json.nvme_error_information_log.table[5].status_field = {}; 00:14:19.256 json.nvme_error_information_log.table[5].status_field.do_not_retry = true; 00:14:19.256 json.nvme_error_information_log.table[5].status_field.status_code = 6; 00:14:19.256 json.nvme_error_information_log.table[5].status_field.status_code_type = 0; 00:14:19.256 json.nvme_error_information_log.table[5].status_field.string = "Internal Error"; 00:14:19.256 json.nvme_error_information_log.table[5].status_field.value = 24582; 00:14:19.256 json.nvme_error_information_log.table[5].submission_queue_id = 0; 00:14:19.256 json.nvme_error_information_log.table[6] = {}; 00:14:19.256 json.nvme_error_information_log.table[6].error_count = 19592; 00:14:19.256 json.nvme_error_information_log.table[6].lba = {}; 00:14:19.256 json.nvme_error_information_log.table[6].lba.value = 0; 00:14:19.256 json.nvme_error_information_log.table[6].phase_tag = false; 00:14:19.256 json.nvme_error_information_log.table[6].status_field = {}; 00:14:19.256 json.nvme_error_information_log.table[6].status_field.do_not_retry = true; 00:14:19.256 json.nvme_error_information_log.table[6].status_field.status_code = 6; 00:14:19.256 json.nvme_error_information_log.table[6].status_field.status_code_type = 0; 00:14:19.256 json.nvme_error_information_log.table[6].status_field.string = "Internal Error"; 00:14:19.256 json.nvme_error_information_log.table[6].status_field.value = 24582; 00:14:19.256 json.nvme_error_information_log.table[6].submission_queue_id = 2; 00:14:19.256 json.nvme_error_information_log.table[7] = {}; 00:14:19.256 json.nvme_error_information_log.table[7].error_count = 19591; 00:14:19.256 json.nvme_error_information_log.table[7].lba = {}; 00:14:19.256 json.nvme_error_information_log.table[7].lba.value = 0; 00:14:19.256 json.nvme_error_information_log.table[7].phase_tag = false; 00:14:19.256 json.nvme_error_information_log.table[7].status_field = {}; 00:14:19.256 json.nvme_error_information_log.table[7].status_field.do_not_retry = true; 00:14:19.256 json.nvme_error_information_log.table[7].status_field.status_code = 6; 00:14:19.256 json.nvme_error_information_log.table[7].status_field.status_code_type = 0; 00:14:19.256 json.nvme_error_information_log.table[7].status_field.string = "Internal Error"; 00:14:19.256 json.nvme_error_information_log.table[7].status_field.value = 24582; 00:14:19.256 json.nvme_error_information_log.table[7].submission_queue_id = 2; 00:14:19.256 json.nvme_error_information_log.table[8] = {}; 00:14:19.256 json.nvme_error_information_log.table[8].error_count = 19590; 00:14:19.256 json.nvme_error_information_log.table[8].lba = {}; 00:14:19.256 json.nvme_error_information_log.table[8].lba.value = 0; 00:14:19.256 json.nvme_error_information_log.table[8].phase_tag = false; 00:14:19.256 json.nvme_error_information_log.table[8].status_field = {}; 00:14:19.256 json.nvme_error_information_log.table[8].status_field.do_not_retry = true; 00:14:19.256 json.nvme_error_information_log.table[8].status_field.status_code = 6; 00:14:19.256 json.nvme_error_information_log.table[8].status_field.status_code_type = 0; 00:14:19.256 json.nvme_error_information_log.table[8].status_field.string = "Internal Error"; 00:14:19.256 json.nvme_error_information_log.table[8].status_field.value = 24582; 00:14:19.256 json.nvme_error_information_log.table[8].submission_queue_id = 0; 00:14:19.256 
json.nvme_error_information_log.table[9] = {}; 00:14:19.256 json.nvme_error_information_log.table[9].error_count = 19589; 00:14:19.256 json.nvme_error_information_log.table[9].lba = {}; 00:14:19.256 json.nvme_error_information_log.table[9].lba.value = 0; 00:14:19.256 json.nvme_error_information_log.table[9].phase_tag = false; 00:14:19.256 json.nvme_error_information_log.table[9].status_field = {}; 00:14:19.256 json.nvme_error_information_log.table[9].status_field.do_not_retry = true; 00:14:19.256 json.nvme_error_information_log.table[9].status_field.status_code = 6; 00:14:19.256 json.nvme_error_information_log.table[9].status_field.status_code_type = 0; 00:14:19.256 json.nvme_error_information_log.table[9].status_field.string = "Internal Error"; 00:14:19.256 json.nvme_error_information_log.table[9].status_field.value = 24582; 00:14:19.256 json.nvme_error_information_log.table[9].submission_queue_id = 2; 00:14:19.256 json.nvme_error_information_log.unread = 48; 00:14:19.256 json.nvme_ieee_oui_identifier = 6083300; 00:14:19.256 json.nvme_number_of_namespaces = 128; 00:14:19.256 json.nvme_pci_vendor = {}; 00:14:19.256 json.nvme_pci_vendor.id = 32902; 00:14:19.256 json.nvme_pci_vendor.subsystem_id = 32902; 00:14:19.256 json.nvme_smart_health_information_log = {}; 00:14:19.256 json.nvme_smart_health_information_log.available_spare = 99; 00:14:19.256 json.nvme_smart_health_information_log.available_spare_threshold = 10; 00:14:19.256 json.nvme_smart_health_information_log.controller_busy_time = 2527; 00:14:19.256 json.nvme_smart_health_information_log.critical_comp_time = 0; 00:14:19.256 json.nvme_smart_health_information_log.critical_warning = 0; 00:14:19.256 json.nvme_smart_health_information_log.data_units_read = 371113765; 00:14:19.256 json.nvme_smart_health_information_log.data_units_written = 510510231; 00:14:19.256 json.nvme_smart_health_information_log.host_reads = 22084650596; 00:14:19.256 json.nvme_smart_health_information_log.host_writes = 25063408073; 00:14:19.256 json.nvme_smart_health_information_log.media_errors = 0; 00:14:19.256 json.nvme_smart_health_information_log.num_err_log_entries = 19598; 00:14:19.256 json.nvme_smart_health_information_log.percentage_used = 17; 00:14:19.256 json.nvme_smart_health_information_log.power_cycles = 28; 00:14:19.256 json.nvme_smart_health_information_log.power_on_hours = 15505; 00:14:19.256 json.nvme_smart_health_information_log.temperature = 38; 00:14:19.256 json.nvme_smart_health_information_log.unsafe_shutdowns = 45; 00:14:19.256 json.nvme_smart_health_information_log.warning_temp_time = 1188; 00:14:19.256 json.nvme_total_capacity = 4000787030016; 00:14:19.256 json.nvme_unallocated_capacity = 0; 00:14:19.256 json.nvme_version = {}; 00:14:19.256 json.nvme_version.string = "1.2"; 00:14:19.256 json.nvme_version.value = 66048; 00:14:19.256 json.power_cycle_count = 28; 00:14:19.256 json.power_on_time = {}; 00:14:19.256 json.power_on_time.hours = 15505; 00:14:19.256 json.serial_number = "BTLJ83030AK84P0DGN"; 00:14:19.256 json.smartctl = {}; 00:14:19.256 json.smartctl.argv = []; 00:14:19.256 json.smartctl.argv[0] = "smartctl"; 00:14:19.256 json.smartctl.argv[1] = "-d"; 00:14:19.256 json.smartctl.argv[2] = "nvme"; 00:14:19.256 json.smartctl.argv[3] = "--json=g"; 00:14:19.256 json.smartctl.argv[4] = "-a"; 00:14:19.256 json.smartctl.build_info = "(local build)"; 00:14:19.256 json.smartctl.exit_status = 0; 00:14:19.256 json.smartctl.platform_info = "x86_64-linux-6.7.0-68.fc38.x86_64"; 00:14:19.256 json.smartctl.pre_release = false; 00:14:19.256 
json.smartctl.svn_revision = "5530"; 00:14:19.256 json.smartctl.version = []; 00:14:19.256 json.smartctl.version[0] = 7; 00:14:19.256 json.smartctl.version[1] = 4; 00:14:19.256 json.smart_status = {}; 00:14:19.256 json.smart_status.nvme = {}; 00:14:19.256 json.smart_status.nvme.value = 0; 00:14:19.256 json.smart_status.passed = true; 00:14:19.256 json.smart_support = {}; 00:14:19.256 json.smart_support.available = true; 00:14:19.256 json.smart_support.enabled = true; 00:14:19.256 json.temperature = {}; 00:14:19.256 json.temperature.current = 38;' 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@51 -- # true 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@51 -- # DIFF_SMART_JSON='json.local_time.asctime = "Thu Apr 25 20:09:00 2024 CEST"; 00:14:19.256 json.local_time.time_t = 1714068540; 00:14:19.256 json.nvme_smart_health_information_log.data_units_read = 371113763; 00:14:19.256 json.nvme_smart_health_information_log.host_reads = 22084650541;' 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@54 -- # grep -v 'json\.nvme_smart_health_information_log\.\|json\.local_time\.\|json\.temperature\.\|json\.power_on_time\.hours' 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@54 -- # true 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@54 -- # ERR_SMART_JSON= 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@56 -- # '[' -n '' ']' 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@61 -- # smartctl -d nvme -l error /dev/spdk/nvme0 00:14:19.256 [2024-04-25 20:09:16.773453] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40. 00:14:19.256 20:09:16 -- cuse/spdk_smartctl_cuse.sh@61 -- # CUSE_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.256 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.256 00:14:19.256 === START OF SMART DATA SECTION === 00:14:19.256 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:14:19.257 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:14:19.257 0 19598 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 1 19597 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 2 19596 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 3 19595 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 4 19594 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 5 19593 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 6 19592 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 7 19591 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 8 19590 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 9 19589 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 10 19588 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 11 19587 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 12 19586 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 13 19585 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 14 19584 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 15 19583 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 ... 
(48 entries not read)' 00:14:19.257 20:09:16 -- cuse/spdk_smartctl_cuse.sh@62 -- # '[' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.257 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.257 00:14:19.257 === START OF SMART DATA SECTION === 00:14:19.257 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:14:19.257 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:14:19.257 0 19598 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 1 19597 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 2 19596 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 3 19595 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 4 19594 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 5 19593 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 6 19592 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 7 19591 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 8 19590 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 9 19589 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 10 19588 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 11 19587 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 12 19586 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 13 19585 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 14 19584 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 15 19583 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 ... (48 entries not read)' '!=' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.257 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.257 00:14:19.257 === START OF SMART DATA SECTION === 00:14:19.257 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:14:19.257 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:14:19.257 0 19598 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 1 19597 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 2 19596 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 3 19595 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 4 19594 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 5 19593 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 6 19592 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 7 19591 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 8 19590 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 9 19589 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 10 19588 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 11 19587 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 12 19586 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 13 19585 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 14 19584 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 15 19583 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 ... 
(48 entries not read)' ']' 00:14:19.257 20:09:16 -- cuse/spdk_smartctl_cuse.sh@68 -- # smartctl -d nvme -i /dev/spdk/nvme0n1 00:14:19.257 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.257 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.257 00:14:19.257 === START OF INFORMATION SECTION === 00:14:19.257 Model Number: INTEL SSDPE2KX040T8 00:14:19.257 Serial Number: BTLJ83030AK84P0DGN 00:14:19.257 Firmware Version: VDV10184 00:14:19.257 PCI Vendor/Subsystem ID: 0x8086 00:14:19.257 IEEE OUI Identifier: 0x5cd2e4 00:14:19.257 Total NVM Capacity: 4,000,787,030,016 [4.00 TB] 00:14:19.257 Unallocated NVM Capacity: 0 00:14:19.257 Controller ID: 0 00:14:19.257 NVMe Version: 1.2 00:14:19.257 Number of Namespaces: 128 00:14:19.257 Namespace 1 Size/Capacity: 4,000,787,030,016 [4.00 TB] 00:14:19.257 Namespace 1 Formatted LBA Size: 512 00:14:19.257 Namespace 1 IEEE EUI-64: 000000 0000000f3d 00:14:19.257 Local Time is: Thu Apr 25 20:09:16 2024 CEST 00:14:19.257 00:14:19.257 20:09:16 -- cuse/spdk_smartctl_cuse.sh@69 -- # smartctl -d nvme -c /dev/spdk/nvme0 00:14:19.257 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.257 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.257 00:14:19.257 [2024-04-25 20:09:16.910728] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40. 00:14:19.257 === START OF INFORMATION SECTION === 00:14:19.257 Firmware Updates (0x18): 4 Slots, no Reset required 00:14:19.257 Optional Admin Commands (0x000e): Format Frmw_DL NS_Mngmt 00:14:19.257 Optional NVM Commands (0x0006): Wr_Unc DS_Mngmt 00:14:19.257 Log Page Attributes (0x0e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg 00:14:19.257 Maximum Data Transfer Size: 32 Pages 00:14:19.257 Warning Comp. Temp. Threshold: 70 Celsius 00:14:19.257 Critical Comp. Temp. Threshold: 80 Celsius 00:14:19.257 00:14:19.257 Supported Power States 00:14:19.257 St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat 00:14:19.257 0 + 20.00W - - 0 0 0 0 0 0 00:14:19.257 00:14:19.257 20:09:16 -- cuse/spdk_smartctl_cuse.sh@70 -- # smartctl -d nvme -A /dev/spdk/nvme0 00:14:19.257 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.257 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.257 00:14:19.257 [2024-04-25 20:09:16.957284] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40. 00:14:19.257 === START OF SMART DATA SECTION === 00:14:19.257 SMART/Health Information (NVMe Log 0x02) 00:14:19.257 Critical Warning: 0x00 00:14:19.257 Temperature: 38 Celsius 00:14:19.257 Available Spare: 99% 00:14:19.257 Available Spare Threshold: 10% 00:14:19.257 Percentage Used: 17% 00:14:19.257 Data Units Read: 371,113,765 [190 TB] 00:14:19.257 Data Units Written: 510,510,231 [261 TB] 00:14:19.257 Host Read Commands: 22,084,650,596 00:14:19.257 Host Write Commands: 25,063,408,073 00:14:19.257 Controller Busy Time: 2,527 00:14:19.257 Power Cycles: 28 00:14:19.257 Power On Hours: 15,505 00:14:19.257 Unsafe Shutdowns: 45 00:14:19.257 Media and Data Integrity Errors: 0 00:14:19.257 Error Information Log Entries: 19,598 00:14:19.257 Warning Comp. Temperature Time: 1188 00:14:19.257 Critical Comp. 
Temperature Time: 0 00:14:19.257 00:14:19.257 20:09:16 -- cuse/spdk_smartctl_cuse.sh@73 -- # smartctl -d nvme -x /dev/spdk/nvme0 00:14:19.257 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.257 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.257 00:14:19.257 [2024-04-25 20:09:17.015355] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40. 00:14:19.257 === START OF INFORMATION SECTION === 00:14:19.257 Model Number: INTEL SSDPE2KX040T8 00:14:19.257 Serial Number: BTLJ83030AK84P0DGN 00:14:19.257 Firmware Version: VDV10184 00:14:19.257 PCI Vendor/Subsystem ID: 0x8086 00:14:19.257 IEEE OUI Identifier: 0x5cd2e4 00:14:19.257 Total NVM Capacity: 4,000,787,030,016 [4.00 TB] 00:14:19.257 Unallocated NVM Capacity: 0 00:14:19.257 Controller ID: 0 00:14:19.257 NVMe Version: 1.2 00:14:19.257 Number of Namespaces: 128 00:14:19.257 Local Time is: Thu Apr 25 20:09:17 2024 CEST 00:14:19.257 Firmware Updates (0x18): 4 Slots, no Reset required 00:14:19.257 Optional Admin Commands (0x000e): Format Frmw_DL NS_Mngmt 00:14:19.257 Optional NVM Commands (0x0006): Wr_Unc DS_Mngmt 00:14:19.257 Log Page Attributes (0x0e): Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg 00:14:19.257 Maximum Data Transfer Size: 32 Pages 00:14:19.257 Warning Comp. Temp. Threshold: 70 Celsius 00:14:19.257 Critical Comp. Temp. Threshold: 80 Celsius 00:14:19.257 00:14:19.257 Supported Power States 00:14:19.257 St Op Max Active Idle RL RT WL WT Ent_Lat Ex_Lat 00:14:19.257 0 + 20.00W - - 0 0 0 0 0 0 00:14:19.257 00:14:19.257 === START OF SMART DATA SECTION === 00:14:19.257 SMART overall-health self-assessment test result: PASSED 00:14:19.257 00:14:19.257 SMART/Health Information (NVMe Log 0x02) 00:14:19.257 Critical Warning: 0x00 00:14:19.257 Temperature: 38 Celsius 00:14:19.257 Available Spare: 99% 00:14:19.257 Available Spare Threshold: 10% 00:14:19.257 Percentage Used: 17% 00:14:19.257 Data Units Read: 371,113,765 [190 TB] 00:14:19.257 Data Units Written: 510,510,231 [261 TB] 00:14:19.257 Host Read Commands: 22,084,650,596 00:14:19.257 Host Write Commands: 25,063,408,073 00:14:19.257 Controller Busy Time: 2,527 00:14:19.257 Power Cycles: 28 00:14:19.257 Power On Hours: 15,505 00:14:19.257 Unsafe Shutdowns: 45 00:14:19.257 Media and Data Integrity Errors: 0 00:14:19.257 Error Information Log Entries: 19,598 00:14:19.257 Warning Comp. Temperature Time: 1188 00:14:19.257 Critical Comp. Temperature Time: 0 00:14:19.257 00:14:19.257 Error Information (NVMe Log 0x01, 16 of 64 entries) 00:14:19.257 Num ErrCount SQId CmdId Status PELoc LBA NSID VS Message 00:14:19.257 0 19598 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 1 19597 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 2 19596 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 3 19595 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 4 19594 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 5 19593 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 6 19592 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 7 19591 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 8 19590 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 9 19589 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 10 19588 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 11 19587 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 12 19586 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 13 19585 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 14 19584 0 - 0xc00c - 0 - - Internal Error 00:14:19.257 15 19583 2 - 0xc00c - 0 - - Internal Error 00:14:19.257 ... 
(48 entries not read) 00:14:19.257 00:14:19.257 Self-tests not supported 00:14:19.257 00:14:19.257 20:09:17 -- cuse/spdk_smartctl_cuse.sh@74 -- # smartctl -d nvme -H /dev/spdk/nvme0 00:14:19.257 smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.7.0-68.fc38.x86_64] (local build) 00:14:19.257 Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org 00:14:19.257 00:14:19.257 [2024-04-25 20:09:17.098089] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40. 00:14:19.257 === START OF SMART DATA SECTION === 00:14:19.257 SMART overall-health self-assessment test result: PASSED 00:14:19.257 00:14:19.257 20:09:17 -- cuse/spdk_smartctl_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:14:23.449 20:09:21 -- cuse/spdk_smartctl_cuse.sh@77 -- # sleep 1 00:14:24.386 20:09:22 -- cuse/spdk_smartctl_cuse.sh@78 -- # '[' -c /dev/spdk/nvme1 ']' 00:14:24.386 20:09:22 -- cuse/spdk_smartctl_cuse.sh@82 -- # trap - SIGINT SIGTERM EXIT 00:14:24.386 20:09:22 -- cuse/spdk_smartctl_cuse.sh@83 -- # killprocess 2122594 00:14:24.386 20:09:22 -- common/autotest_common.sh@926 -- # '[' -z 2122594 ']' 00:14:24.386 20:09:22 -- common/autotest_common.sh@930 -- # kill -0 2122594 00:14:24.386 20:09:22 -- common/autotest_common.sh@931 -- # uname 00:14:24.645 20:09:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:24.645 20:09:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2122594 00:14:24.645 20:09:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:24.645 20:09:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:24.645 20:09:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2122594' 00:14:24.645 killing process with pid 2122594 00:14:24.645 20:09:22 -- common/autotest_common.sh@945 -- # kill 2122594 00:14:24.645 20:09:22 -- common/autotest_common.sh@950 -- # wait 2122594 00:14:25.214 00:14:25.214 real 0m31.909s 00:14:25.214 user 0m33.811s 00:14:25.214 sys 0m7.453s 00:14:25.214 20:09:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:25.214 20:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:25.214 ************************************ 00:14:25.214 END TEST nvme_smartctl_cuse 00:14:25.214 ************************************ 00:14:25.214 20:09:22 -- cuse/nvme_cuse.sh@22 -- # run_test nvme_ns_manage_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh 00:14:25.214 20:09:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:25.214 20:09:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:25.214 20:09:22 -- common/autotest_common.sh@10 -- # set +x 00:14:25.214 ************************************ 00:14:25.214 START TEST nvme_ns_manage_cuse 00:14:25.214 ************************************ 00:14:25.214 20:09:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh 00:14:25.214 * Looking for test storage... 
00:14:25.214 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse 00:14:25.214 20:09:23 -- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:14:25.214 20:09:23 -- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh 00:14:25.214 20:09:23 -- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../ 00:14:25.214 20:09:23 -- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk 00:14:25.214 20:09:23 -- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:14:25.214 20:09:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.214 20:09:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.214 20:09:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.214 20:09:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.214 20:09:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.214 20:09:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.214 20:09:23 -- paths/export.sh@5 -- # export PATH 00:14:25.214 20:09:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.214 20:09:23 -- nvme/functions.sh@10 -- # ctrls=() 00:14:25.214 20:09:23 -- nvme/functions.sh@10 -- # declare -A ctrls 00:14:25.214 20:09:23 -- nvme/functions.sh@11 -- # nvmes=() 
00:14:25.214 20:09:23 -- nvme/functions.sh@11 -- # declare -A nvmes 00:14:25.214 20:09:23 -- nvme/functions.sh@12 -- # bdfs=() 00:14:25.214 20:09:23 -- nvme/functions.sh@12 -- # declare -A bdfs 00:14:25.214 20:09:23 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:14:25.214 20:09:23 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:14:25.214 20:09:23 -- nvme/functions.sh@14 -- # nvme_name= 00:14:25.214 20:09:23 -- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:14:25.214 20:09:23 -- cuse/nvme_ns_manage_cuse.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:14:28.499 Waiting for block devices as requested 00:14:28.499 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:14:28.499 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:14:28.499 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:14:28.499 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:14:28.499 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:14:28.499 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:14:28.499 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:14:28.759 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:14:28.759 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:14:28.759 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:14:29.018 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:14:29.018 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:14:29.018 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:14:29.278 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:14:29.278 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:14:29.278 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:14:29.540 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:14:29.540 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@11 -- # scan_nvme_ctrls 00:14:29.540 20:09:27 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:14:29.540 20:09:27 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:14:29.540 20:09:27 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@49 -- # pci=0000:5e:00.0 00:14:29.540 20:09:27 -- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0 00:14:29.540 20:09:27 -- scripts/common.sh@15 -- # local i 00:14:29.540 20:09:27 -- scripts/common.sh@18 -- # [[ =~ 0000:5e:00.0 ]] 00:14:29.540 20:09:27 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:14:29.540 20:09:27 -- scripts/common.sh@24 -- # return 0 00:14:29.540 20:09:27 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:14:29.540 20:09:27 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:14:29.540 20:09:27 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@18 -- # shift 00:14:29.540 20:09:27 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[vid]=0x8086 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- 
nvme/functions.sh@22 -- # [[ -n 0x8086 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n BTLJ83030AK84P0DGN ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN "' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN ' 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n INTEL SSDPE2KX040T8 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8 "' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8 ' 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n VDV10184 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[rab]=0 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 5cd2e4 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 5 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[mdts]=5 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x10200 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10200 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x989680 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"' 
00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0xe4e1c0 ]] 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"' 00:14:29.540 20:09:27 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.540 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.540 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x200 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x200 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 
00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[mec]=1 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[oacs]=0xe 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x18 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x18 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0xe ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[lpa]=0xe 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 63 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[elpe]=63 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:14:29.541 
20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 353 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[cctemp]=353 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- 
nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.541 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:14:29.541 20:09:27 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:14:29.541 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[nn]=128 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x6 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x6 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 
00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[fna]=0x4 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[vwc]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[sgls]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:14:29.542 20:09:27 -- 
nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[subnqn]= 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.542 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:14:29.542 20:09:27 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:14:29.542 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0' 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:14:29.543 20:09:27 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:14:29.543 20:09:27 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:14:29.543 20:09:27 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:14:29.543 20:09:27 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@18 -- # shift 00:14:29.543 20:09:27 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0x1d1c0beb0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 
20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 4,000,787,030,016 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=0 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.543 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.543 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"' 00:14:29.543 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=0 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[nsattr]="0"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 010000000f3d00000000000000000000 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="010000000f3d00000000000000000000"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=010000000f3d00000000000000000000 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n 0000000000000f3d ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000f3d"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000f3d 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0x2 (in use) ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0x2 (in use)"' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0x2 (in use)' 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0 lbads:12 rp:0 "' 00:14:29.544 20:09:27 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0 lbads:12 rp:0 ' 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # IFS=: 00:14:29.544 20:09:27 -- nvme/functions.sh@21 -- # read -r reg val 00:14:29.544 20:09:27 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:14:29.544 20:09:27 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:14:29.544 20:09:27 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0 00:14:29.544 20:09:27 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@14 -- # get_nvme_with_ns_management 00:14:29.544 20:09:27 -- nvme/functions.sh@153 -- # local _ctrls 00:14:29.544 20:09:27 -- nvme/functions.sh@155 -- # _ctrls=($(get_nvmes_with_ns_management)) 00:14:29.544 20:09:27 -- nvme/functions.sh@155 -- # get_nvmes_with_ns_management 00:14:29.544 20:09:27 -- nvme/functions.sh@144 -- # (( 1 == 0 )) 00:14:29.544 20:09:27 -- nvme/functions.sh@146 -- # local ctrl 00:14:29.544 20:09:27 -- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}" 00:14:29.544 20:09:27 -- nvme/functions.sh@148 -- # get_oacs 
nvme0 nsmgt 00:14:29.544 20:09:27 -- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt 00:14:29.544 20:09:27 -- nvme/functions.sh@122 -- # local -A bits 00:14:29.544 20:09:27 -- nvme/functions.sh@125 -- # bits["ss/sr"]=1 00:14:29.544 20:09:27 -- nvme/functions.sh@126 -- # bits["fnvme"]=2 00:14:29.544 20:09:27 -- nvme/functions.sh@127 -- # bits["fc/fi"]=4 00:14:29.544 20:09:27 -- nvme/functions.sh@128 -- # bits["nsmgt"]=8 00:14:29.544 20:09:27 -- nvme/functions.sh@129 -- # bits["self-test"]=16 00:14:29.544 20:09:27 -- nvme/functions.sh@130 -- # bits["directives"]=32 00:14:29.544 20:09:27 -- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64 00:14:29.544 20:09:27 -- nvme/functions.sh@132 -- # bits["virtmgt"]=128 00:14:29.544 20:09:27 -- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256 00:14:29.544 20:09:27 -- nvme/functions.sh@134 -- # bits["getlba"]=512 00:14:29.544 20:09:27 -- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024 00:14:29.544 20:09:27 -- nvme/functions.sh@137 -- # bit=nsmgt 00:14:29.544 20:09:27 -- nvme/functions.sh@138 -- # [[ -n 8 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs 00:14:29.544 20:09:27 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs 00:14:29.544 20:09:27 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@75 -- # [[ -n 0xe ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@76 -- # echo 0xe 00:14:29.544 20:09:27 -- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] )) 00:14:29.544 20:09:27 -- nvme/functions.sh@148 -- # echo nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@156 -- # (( 1 > 0 )) 00:14:29.544 20:09:27 -- nvme/functions.sh@157 -- # echo nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@158 -- # return 0 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@14 -- # nvme_name=nvme0 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@20 -- # nvme_dev=/dev/nvme0 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@21 -- # bdf=0000:5e:00.0 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@22 -- # nsids=($(get_nvme_nss "$nvme_name")) 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@22 -- # get_nvme_nss nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@94 -- # local ctrl=nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@96 -- # [[ -n nvme0_ns ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@97 -- # local -n _nss=nvme0_ns 00:14:29.544 20:09:27 -- nvme/functions.sh@99 -- # echo 1 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@25 -- # get_nvme_ctrl_feature nvme0 oaes 00:14:29.544 20:09:27 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oaes 00:14:29.544 20:09:27 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@75 -- # [[ -n 0x200 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@76 -- # echo 0x200 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@25 -- # oaes=0x200 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@26 -- # aer_ns_change=0 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@27 -- # get_nvme_ctrl_feature nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=cntlid 00:14:29.544 20:09:27 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:14:29.544 20:09:27 -- nvme/functions.sh@75 -- # [[ -n 0 ]] 00:14:29.544 20:09:27 -- nvme/functions.sh@76 -- # echo 0 00:14:29.544 20:09:27 -- 
cuse/nvme_ns_manage_cuse.sh@27 -- # cntlid=0 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@70 -- # remove_all_namespaces 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces' 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:14:29.544 --- 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces' 00:14:29.544 delete all namespaces 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:14:29.544 --- 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}" 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@40 -- # info_print 'removing nsid=1' 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:14:29.544 --- 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1' 00:14:29.544 removing nsid=1 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:14:29.544 --- 00:14:29.544 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0 00:14:29.803 detach-ns: Success, nsid:1 00:14:29.803 20:09:27 -- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1 00:14:47.952 delete-ns: Success, deleted nsid:1 00:14:47.952 20:09:45 -- cuse/nvme_ns_manage_cuse.sh@72 -- # reset_nvme_if_aer_unsupported /dev/nvme0 00:14:47.952 20:09:45 -- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]] 00:14:47.952 20:09:45 -- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1 00:14:48.887 20:09:46 -- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0 00:14:48.887 20:09:46 -- cuse/nvme_ns_manage_cuse.sh@73 -- # sleep 1 00:14:49.824 20:09:47 -- cuse/nvme_ns_manage_cuse.sh@75 -- # PCI_ALLOWED=0000:5e:00.0 00:14:49.824 20:09:47 -- cuse/nvme_ns_manage_cuse.sh@75 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:14:52.362 0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0 00:14:52.362 0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1 00:14:52.362 0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2 00:14:52.362 0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3 00:14:52.362 0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4 00:14:52.362 0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5 00:14:52.362 0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6 00:14:52.362 0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7 00:14:52.362 0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0 00:14:52.362 0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1 00:14:52.362 0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2 00:14:52.621 0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3 00:14:52.621 0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4 00:14:52.621 0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5 00:14:52.621 0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6 00:14:52.621 0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7 00:14:55.909 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:14:55.909 20:09:53 -- cuse/nvme_ns_manage_cuse.sh@78 -- # spdk_tgt_pid=2127865 00:14:55.909 20:09:53 -- cuse/nvme_ns_manage_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:14:55.909 20:09:53 -- 
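The remove_all_namespaces step above detaches and deletes each namespace with nvme-cli, then resets the controller, because this drive does not advertise namespace-change async events (oaes=0x200, hence aer_ns_change=0) and the kernel's view has to be refreshed by hand. A rough stand-alone equivalent, assuming a single controller at /dev/nvme0 and the cntlid of 0 read earlier:

  ctrl=/dev/nvme0
  for nsid in $(nvme list-ns "$ctrl" | awk -F: '{print $2}'); do
      nvme detach-ns "$ctrl" -n "$((nsid))" -c 0 || true   # tolerate already-detached namespaces
      nvme delete-ns "$ctrl" -n "$((nsid))"      || true
  done
  nvme reset "$ctrl" && sleep 1                            # refresh /dev/nvme0n* without ns-change AENs
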
cuse/nvme_ns_manage_cuse.sh@79 -- # trap 'kill -9 ${spdk_tgt_pid}; clean_up; exit 1' SIGINT SIGTERM EXIT 00:14:55.909 20:09:53 -- cuse/nvme_ns_manage_cuse.sh@81 -- # waitforlisten 2127865 00:14:55.909 20:09:53 -- common/autotest_common.sh@819 -- # '[' -z 2127865 ']' 00:14:55.909 20:09:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.909 20:09:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:55.909 20:09:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.909 20:09:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:55.909 20:09:53 -- common/autotest_common.sh@10 -- # set +x 00:14:55.909 [2024-04-25 20:09:53.809819] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:14:55.909 [2024-04-25 20:09:53.809879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127865 ] 00:14:55.909 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.168 [2024-04-25 20:09:53.903500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:56.168 [2024-04-25 20:09:53.998226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.168 [2024-04-25 20:09:53.998433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.168 [2024-04-25 20:09:53.998437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.426 [2024-04-25 20:09:54.185826] 'OCF_Core' volume operations registered 00:14:56.426 [2024-04-25 20:09:54.189307] 'OCF_Cache' volume operations registered 00:14:56.426 [2024-04-25 20:09:54.193263] 'OCF Composite' volume operations registered 00:14:56.426 [2024-04-25 20:09:54.196766] 'SPDK_block_device' volume operations registered 00:14:56.993 20:09:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:56.993 20:09:54 -- common/autotest_common.sh@852 -- # return 0 00:14:56.993 20:09:54 -- cuse/nvme_ns_manage_cuse.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:15:00.281 00:15:00.281 20:09:57 -- cuse/nvme_ns_manage_cuse.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0 00:15:00.281 [2024-04-25 20:09:58.026152] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:15:00.281 [2024-04-25 20:09:58.026317] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:15:00.282 20:09:58 -- cuse/nvme_ns_manage_cuse.sh@86 -- # ctrlr=/dev/spdk/nvme0 00:15:00.282 20:09:58 -- cuse/nvme_ns_manage_cuse.sh@88 -- # sleep 1 00:15:01.218 20:09:59 -- cuse/nvme_ns_manage_cuse.sh@89 -- # [[ -c /dev/spdk/nvme0 ]] 00:15:01.218 20:09:59 -- cuse/nvme_ns_manage_cuse.sh@94 -- # sleep 1 00:15:02.154 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@96 -- # for nsid in "${nsids[@]}" 00:15:02.154 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@97 -- # info_print 'create ns: nsze=10000 ncap=10000 flbias=0' 00:15:02.154 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:15:02.154 --- 00:15:02.154 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'create ns: nsze=10000 ncap=10000 flbias=0' 
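At this point the test has a running spdk_tgt (cores 0-1, -m 0x3), has claimed the drive as bdev controller Nvme0, and has exposed it back to user space as a CUSE character device under /dev/spdk. Reproducing that wiring outside the harness looks roughly like the following; $SPDK stands in for the repository root and the polling loop replaces the test's waitforlisten helper:

  "$SPDK/build/bin/spdk_tgt" -m 0x3 &
  spdk_pid=$!
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                                        # wait for /var/tmp/spdk.sock to come up
  done
  "$SPDK/scripts/rpc.py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
  "$SPDK/scripts/rpc.py" bdev_nvme_cuse_register -n Nvme0
  ls -l /dev/spdk/nvme0                                # CUSE controller node created by the register call
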
00:15:02.154 create ns: nsze=10000 ncap=10000 flbias=0 00:15:02.154 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:15:02.154 --- 00:15:02.154 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@98 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/spdk/nvme0 -s 10000 -c 10000 -f 0 00:15:02.721 create-ns: Success, created nsid:1 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@99 -- # info_print 'attach ns: nsid=1 controller=0' 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:15:02.721 --- 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'attach ns: nsid=1 controller=0' 00:15:02.721 attach ns: nsid=1 controller=0 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:15:02.721 --- 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@100 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/spdk/nvme0 -n 1 -c 0 00:15:02.721 attach-ns: Success, nsid:1 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@101 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]] 00:15:02.721 20:10:00 -- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1 00:15:04.098 20:10:01 -- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0 00:15:04.098 [2024-04-25 20:10:01.647022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:15:04.098 [2024-04-25 20:10:01.648039] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:15:04.098 20:10:01 -- cuse/nvme_ns_manage_cuse.sh@102 -- # sleep 1 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@103 -- # [[ -c /dev/spdk/nvme0n1 ]] 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@104 -- # info_print 'detach ns: nsid=1 controller=0' 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:15:05.040 --- 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'detach ns: nsid=1 controller=0' 00:15:05.040 detach ns: nsid=1 controller=0 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:15:05.040 --- 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@105 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/spdk/nvme0 -n 1 -c 0 00:15:05.040 detach-ns: Success, nsid:1 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@106 -- # info_print 'delete ns: nsid=1' 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:15:05.040 --- 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete ns: nsid=1' 00:15:05.040 delete ns: nsid=1 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:15:05.040 --- 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/spdk/nvme0 -n 1 00:15:05.040 delete-ns: Success, deleted nsid:1 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@108 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]] 00:15:05.040 20:10:02 -- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1 00:15:05.976 20:10:03 -- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0 00:15:05.976 [2024-04-25 20:10:03.713005] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller 00:15:06.235 20:10:04 -- cuse/nvme_ns_manage_cuse.sh@109 -- # sleep 1 00:15:07.610 20:10:05 -- cuse/nvme_ns_manage_cuse.sh@110 -- # [[ ! 
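The namespace lifecycle is then exercised through that CUSE node with ordinary nvme-cli commands; sizes are in logical blocks, and the 10000-block namespace is just a small throwaway used by the test. Condensed, the create/attach/detach/delete sequence above is:

  cuse=/dev/spdk/nvme0
  nvme create-ns "$cuse" -s 10000 -c 10000 -f 0
  nvme attach-ns "$cuse" -n 1 -c 0
  nvme reset "$cuse" && sleep 1                        # no ns-change AENs, so reset to surface the namespace
  test -c /dev/spdk/nvme0n1 && echo 'namespace visible through CUSE'
  nvme detach-ns "$cuse" -n 1 -c 0
  nvme delete-ns "$cuse" -n 1
  nvme reset "$cuse" && sleep 1
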
-c /dev/spdk/nvme0n1 ]] 00:15:07.610 20:10:05 -- cuse/nvme_ns_manage_cuse.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:11.849 20:10:09 -- cuse/nvme_ns_manage_cuse.sh@120 -- # sleep 1 00:15:12.442 20:10:10 -- cuse/nvme_ns_manage_cuse.sh@121 -- # [[ ! -c /dev/spdk/nvme0 ]] 00:15:12.442 20:10:10 -- cuse/nvme_ns_manage_cuse.sh@123 -- # trap - SIGINT SIGTERM EXIT 00:15:12.442 20:10:10 -- cuse/nvme_ns_manage_cuse.sh@124 -- # killprocess 2127865 00:15:12.442 20:10:10 -- common/autotest_common.sh@926 -- # '[' -z 2127865 ']' 00:15:12.442 20:10:10 -- common/autotest_common.sh@930 -- # kill -0 2127865 00:15:12.442 20:10:10 -- common/autotest_common.sh@931 -- # uname 00:15:12.442 20:10:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:12.442 20:10:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2127865 00:15:12.701 20:10:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:12.701 20:10:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:12.701 20:10:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2127865' 00:15:12.701 killing process with pid 2127865 00:15:12.701 20:10:10 -- common/autotest_common.sh@945 -- # kill 2127865 00:15:12.701 20:10:10 -- common/autotest_common.sh@950 -- # wait 2127865 00:15:13.270 20:10:10 -- cuse/nvme_ns_manage_cuse.sh@125 -- # clean_up 00:15:13.270 20:10:10 -- cuse/nvme_ns_manage_cuse.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:15:15.806 Waiting for block devices as requested 00:15:15.806 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:15:15.806 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:15:15.806 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:15:15.806 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:15:15.806 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:15:15.806 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:15:16.065 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:15:16.324 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:15:16.324 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:15:21.603 * Events for some block/disk devices (0000:5e:00.0) were not caught, they may be missing 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@48 -- # remove_all_namespaces 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces' 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:15:21.603 --- 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces' 00:15:21.603 delete all namespaces 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:15:21.603 --- 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}" 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@40 -- 
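Teardown above follows the usual autotest pattern: a trap runs killprocess on the spdk_tgt pid and clean_up rebinds the drive to the kernel driver with setup.sh reset. A minimal stand-in for that teardown, reusing the $spdk_pid captured when the target was started:

  cleanup() {
      if [[ -n ${spdk_pid:-} ]] && kill -0 "$spdk_pid" 2>/dev/null; then
          kill "$spdk_pid"
          wait "$spdk_pid" 2>/dev/null || true
      fi
      "$SPDK/scripts/setup.sh" reset                   # give the NVMe device back to the kernel nvme driver
  }
  trap cleanup EXIT SIGINT SIGTERM
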
# info_print 'removing nsid=1' 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@64 -- # echo --- 00:15:21.603 --- 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1' 00:15:21.603 removing nsid=1 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@66 -- # echo --- 00:15:21.603 --- 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0 00:15:21.603 NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002) 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@41 -- # true 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1 00:15:21.603 NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field(0x4002) 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@42 -- # true 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@50 -- # echo 'Restoring /dev/nvme0...' 00:15:21.603 Restoring /dev/nvme0... 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@51 -- # for nsid in "${nsids[@]}" 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@52 -- # get_nvme_ns_feature nvme0 1 ncap 00:15:21.603 20:10:18 -- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=ncap 00:15:21.603 20:10:18 -- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns 00:15:21.603 20:10:18 -- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@87 -- # local -n _ns=nvme0n1 00:15:21.603 20:10:18 -- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@90 -- # echo 0x1d1c0beb0 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@52 -- # ncap=0x1d1c0beb0 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@53 -- # get_nvme_ns_feature nvme0 1 nsze 00:15:21.603 20:10:18 -- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=nsze 00:15:21.603 20:10:18 -- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns 00:15:21.603 20:10:18 -- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@87 -- # local -n _ns=nvme0n1 00:15:21.603 20:10:18 -- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@90 -- # echo 0x1d1c0beb0 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@53 -- # nsze=0x1d1c0beb0 00:15:21.603 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@54 -- # get_active_lbaf nvme0 1 00:15:21.603 20:10:18 -- nvme/functions.sh@103 -- # local ctrl=nvme0 ns=1 reg lbaf 00:15:21.603 20:10:18 -- nvme/functions.sh@105 -- # [[ -n nvme0_ns ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@107 -- # local -n _nss=nvme0_ns 00:15:21.603 20:10:18 -- nvme/functions.sh@108 -- # [[ -n nvme0n1 ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@110 -- # local -n _ns=nvme0n1 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ fpi == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nawupf == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nsfeat == lbaf* ]] 00:15:21.603 20:10:18 
-- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ endgid == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nawun == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nabspf == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nabo == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nabsn == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nulbaf == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ ncap == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ dpc == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ dps == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nguid == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ noiob == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nacwu == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ mssrl == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ dlfeat == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nlbaf == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ mc == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nmic == lbaf* ]] 00:15:21.603 
20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ nvmsetid == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # continue 00:15:21.603 20:10:18 -- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}" 00:15:21.603 20:10:18 -- nvme/functions.sh@113 -- # [[ lbaf0 == lbaf* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@114 -- # [[ ms:0 lbads:9 rp:0x2 (in use) == *\i\n\ \u\s\e* ]] 00:15:21.603 20:10:18 -- nvme/functions.sh@115 -- # echo 0 00:15:21.604 20:10:18 -- nvme/functions.sh@115 -- # return 0 00:15:21.604 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@54 -- # lbaf=0 00:15:21.604 20:10:18 -- cuse/nvme_ns_manage_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/nvme0 -s 0x1d1c0beb0 -c 0x1d1c0beb0 -f 0 00:15:21.604 create-ns: Success, created nsid:1 00:15:21.604 20:10:19 -- cuse/nvme_ns_manage_cuse.sh@56 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/nvme0 -n 1 -c 0 00:15:21.604 attach-ns: Success, nsid:1 00:15:21.604 20:10:19 -- cuse/nvme_ns_manage_cuse.sh@57 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0 00:15:21.604 20:10:19 -- cuse/nvme_ns_manage_cuse.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:15:24.137 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:15:24.137 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:15:24.397 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:15:27.688 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:15:27.688 00:15:27.688 real 1m2.514s 00:15:27.688 user 0m37.241s 00:15:27.688 sys 0m9.412s 00:15:27.688 20:10:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:27.688 20:10:25 -- common/autotest_common.sh@10 -- # set +x 00:15:27.688 ************************************ 00:15:27.688 END TEST nvme_ns_manage_cuse 00:15:27.688 ************************************ 00:15:27.688 20:10:25 -- cuse/nvme_cuse.sh@23 -- # rmmod cuse 00:15:27.689 20:10:25 -- cuse/nvme_cuse.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:15:30.222 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:15:30.222 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:15:30.222 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:15:30.222 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:15:30.480 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:80:04.7 (8086 2021): Already using 
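The restore step reads the full nsze/ncap (0x1d1c0beb0 blocks) back out of the identify data cached before deletion, scans the lbaf entries for the one marked "in use" (lbaf0, 512-byte blocks), and recreates a single namespace spanning the whole drive. In plain nvme-cli terms that restore is simply:

  ctrl=/dev/nvme0
  # nsze/ncap and the in-use format (lbaf 0) were captured before the
  # namespaces were deleted; recreate one namespace covering the whole drive.
  nvme create-ns "$ctrl" -s 0x1d1c0beb0 -c 0x1d1c0beb0 -f 0
  nvme attach-ns "$ctrl" -n 1 -c 0
  nvme reset "$ctrl"
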
the vfio-pci driver 00:15:30.481 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:15:30.481 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:15:30.481 00:15:30.481 real 2m55.382s 00:15:30.481 user 2m23.679s 00:15:30.481 sys 0m34.084s 00:15:30.481 20:10:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.481 20:10:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.481 ************************************ 00:15:30.481 END TEST nvme_cuse 00:15:30.481 ************************************ 00:15:30.481 20:10:28 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:15:30.481 20:10:28 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:15:30.481 20:10:28 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:15:30.481 20:10:28 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh 00:15:30.481 20:10:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:30.481 20:10:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:30.481 20:10:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.481 ************************************ 00:15:30.481 START TEST nvme_rpc 00:15:30.481 ************************************ 00:15:30.481 20:10:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh 00:15:30.739 * Looking for test storage... 00:15:30.739 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:15:30.739 20:10:28 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:15:30.739 20:10:28 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:15:30.739 20:10:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:15:30.739 20:10:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:15:30.739 20:10:28 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:15:30.739 20:10:28 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:15:30.739 20:10:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:30.739 20:10:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:15:30.739 20:10:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:30.739 20:10:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:15:30.739 20:10:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:30.739 20:10:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:15:30.739 20:10:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:15:30.739 20:10:28 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:15:30.739 20:10:28 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:5e:00.0 00:15:30.739 20:10:28 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=2134526 00:15:30.739 20:10:28 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:15:30.739 20:10:28 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 2134526 00:15:30.739 20:10:28 -- common/autotest_common.sh@819 -- # '[' -z 2134526 ']' 00:15:30.739 20:10:28 -- common/autotest_common.sh@823 -- # local 
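nvme_rpc.sh starts by locating the first NVMe device on the box: gen_nvme.sh emits a bdev_nvme attach configuration for every local controller and jq pulls the PCI addresses out of it, which is exactly the pipeline traced above. Stand-alone:

  # First NVMe BDF on the system, as the test computes it (0000:5e:00.0 here).
  bdfs=($("$SPDK/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  bdf=${bdfs[0]}
  echo "running nvme_rpc against $bdf"
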
rpc_addr=/var/tmp/spdk.sock 00:15:30.739 20:10:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:30.739 20:10:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.739 20:10:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:30.739 20:10:28 -- common/autotest_common.sh@10 -- # set +x 00:15:30.739 20:10:28 -- nvme/nvme_rpc.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:15:30.739 [2024-04-25 20:10:28.581157] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:30.739 [2024-04-25 20:10:28.581230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134526 ] 00:15:30.739 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.999 [2024-04-25 20:10:28.677760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:30.999 [2024-04-25 20:10:28.780468] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:30.999 [2024-04-25 20:10:28.780726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.999 [2024-04-25 20:10:28.780731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.257 [2024-04-25 20:10:28.980408] 'OCF_Core' volume operations registered 00:15:31.257 [2024-04-25 20:10:28.983888] 'OCF_Cache' volume operations registered 00:15:31.257 [2024-04-25 20:10:28.987813] 'OCF Composite' volume operations registered 00:15:31.257 [2024-04-25 20:10:28.991281] 'SPDK_block_device' volume operations registered 00:15:31.824 20:10:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:31.824 20:10:29 -- common/autotest_common.sh@852 -- # return 0 00:15:31.824 20:10:29 -- nvme/nvme_rpc.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:15:35.129 Nvme0n1 00:15:35.129 20:10:32 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:15:35.129 20:10:32 -- nvme/nvme_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:15:35.129 request: 00:15:35.129 { 00:15:35.129 "filename": "non_existing_file", 00:15:35.129 "bdev_name": "Nvme0n1", 00:15:35.129 "method": "bdev_nvme_apply_firmware", 00:15:35.129 "req_id": 1 00:15:35.129 } 00:15:35.129 Got JSON-RPC error response 00:15:35.129 response: 00:15:35.129 { 00:15:35.129 "code": -32603, 00:15:35.129 "message": "open file failed." 
00:15:35.129 } 00:15:35.129 20:10:32 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:15:35.129 20:10:32 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:15:35.129 20:10:32 -- nvme/nvme_rpc.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:39.354 20:10:36 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:39.354 20:10:36 -- nvme/nvme_rpc.sh@40 -- # killprocess 2134526 00:15:39.354 20:10:36 -- common/autotest_common.sh@926 -- # '[' -z 2134526 ']' 00:15:39.354 20:10:36 -- common/autotest_common.sh@930 -- # kill -0 2134526 00:15:39.354 20:10:36 -- common/autotest_common.sh@931 -- # uname 00:15:39.354 20:10:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:39.354 20:10:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2134526 00:15:39.354 20:10:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:39.354 20:10:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:39.354 20:10:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2134526' 00:15:39.354 killing process with pid 2134526 00:15:39.354 20:10:36 -- common/autotest_common.sh@945 -- # kill 2134526 00:15:39.354 20:10:36 -- common/autotest_common.sh@950 -- # wait 2134526 00:15:39.613 00:15:39.613 real 0m8.938s 00:15:39.613 user 0m16.926s 00:15:39.613 sys 0m0.855s 00:15:39.613 20:10:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.613 20:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:39.613 ************************************ 00:15:39.613 END TEST nvme_rpc 00:15:39.613 ************************************ 00:15:39.613 20:10:37 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:39.613 20:10:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:39.613 20:10:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:39.613 20:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:39.613 ************************************ 00:15:39.613 START TEST nvme_rpc_timeouts 00:15:39.613 ************************************ 00:15:39.613 20:10:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh 00:15:39.613 * Looking for test storage... 
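The request/response pair above is the negative case nvme_rpc.sh checks: asking bdev_nvme_apply_firmware to load a file that does not exist must fail cleanly with a JSON-RPC error (-32603, "open file failed.") and leave the drive untouched. Invoked by hand, that is simply:

  if ! "$SPDK/scripts/rpc.py" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo 'apply_firmware rejected the bogus file, as expected'
  fi
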
00:15:39.613 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:15:39.613 20:10:37 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:15:39.613 20:10:37 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_2135633 00:15:39.613 20:10:37 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_2135633 00:15:39.613 20:10:37 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=2135656 00:15:39.613 20:10:37 -- nvme/nvme_rpc_timeouts.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 00:15:39.613 20:10:37 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:15:39.613 20:10:37 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 2135656 00:15:39.613 20:10:37 -- common/autotest_common.sh@819 -- # '[' -z 2135656 ']' 00:15:39.613 20:10:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.613 20:10:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:39.613 20:10:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.613 20:10:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:39.613 20:10:37 -- common/autotest_common.sh@10 -- # set +x 00:15:39.613 [2024-04-25 20:10:37.499246] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:15:39.613 [2024-04-25 20:10:37.499327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135656 ] 00:15:39.613 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.872 [2024-04-25 20:10:37.605518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:39.872 [2024-04-25 20:10:37.701229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:39.872 [2024-04-25 20:10:37.701423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:39.872 [2024-04-25 20:10:37.701428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.130 [2024-04-25 20:10:37.903697] 'OCF_Core' volume operations registered 00:15:40.130 [2024-04-25 20:10:37.907188] 'OCF_Cache' volume operations registered 00:15:40.130 [2024-04-25 20:10:37.911361] 'OCF Composite' volume operations registered 00:15:40.130 [2024-04-25 20:10:37.914852] 'SPDK_block_device' volume operations registered 00:15:40.696 20:10:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:40.696 20:10:38 -- common/autotest_common.sh@852 -- # return 0 00:15:40.696 20:10:38 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:15:40.696 Checking default timeout settings: 00:15:40.696 20:10:38 -- nvme/nvme_rpc_timeouts.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config 00:15:40.954 20:10:38 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:15:40.954 Making settings changes with rpc: 00:15:40.954 20:10:38 -- nvme/nvme_rpc_timeouts.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_set_options 
--timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:15:41.211 20:10:38 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:15:41.211 Check default vs. modified settings: 00:15:41.211 20:10:38 -- nvme/nvme_rpc_timeouts.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_2135633 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_2135633 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:15:41.469 20:10:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:15:41.469 Setting action_on_timeout is changed as expected. 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_2135633 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_2135633 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:15:41.470 Setting timeout_us is changed as expected. 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_2135633 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_2135633 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
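The timeouts test captures the configuration before and after the change and then compares only the three fields it touched, which is what the repeated grep/awk/sed pipelines above do against the two settings files. A condensed version of the same check, with illustrative /tmp file names and jq used to trim the save_config output down to the bdev subsystem:

  rpc="$SPDK/scripts/rpc.py"
  "$rpc" save_config | jq '.subsystems[] | select(.subsystem=="bdev")' > /tmp/settings_before.json
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config | jq '.subsystems[] | select(.subsystem=="bdev")' > /tmp/settings_after.json
  diff /tmp/settings_before.json /tmp/settings_after.json
  # only action_on_timeout, timeout_us and timeout_admin_us should differ
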
00:15:41.470 Setting timeout_admin_us is changed as expected. 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_2135633 /tmp/settings_modified_2135633 00:15:41.470 20:10:39 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 2135656 00:15:41.470 20:10:39 -- common/autotest_common.sh@926 -- # '[' -z 2135656 ']' 00:15:41.470 20:10:39 -- common/autotest_common.sh@930 -- # kill -0 2135656 00:15:41.470 20:10:39 -- common/autotest_common.sh@931 -- # uname 00:15:41.470 20:10:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:41.470 20:10:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2135656 00:15:41.728 20:10:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:41.728 20:10:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:41.728 20:10:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2135656' 00:15:41.728 killing process with pid 2135656 00:15:41.728 20:10:39 -- common/autotest_common.sh@945 -- # kill 2135656 00:15:41.728 20:10:39 -- common/autotest_common.sh@950 -- # wait 2135656 00:15:42.295 20:10:39 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:15:42.295 RPC TIMEOUT SETTING TEST PASSED. 00:15:42.295 00:15:42.295 real 0m2.661s 00:15:42.295 user 0m5.314s 00:15:42.295 sys 0m0.797s 00:15:42.295 20:10:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.295 20:10:39 -- common/autotest_common.sh@10 -- # set +x 00:15:42.295 ************************************ 00:15:42.295 END TEST nvme_rpc_timeouts 00:15:42.295 ************************************ 00:15:42.295 20:10:40 -- spdk/autotest.sh@251 -- # '[' 0 -eq 0 ']' 00:15:42.295 20:10:40 -- spdk/autotest.sh@251 -- # uname -s 00:15:42.295 20:10:40 -- spdk/autotest.sh@251 -- # '[' Linux = Linux ']' 00:15:42.295 20:10:40 -- spdk/autotest.sh@252 -- # run_test sw_hotplug /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh 00:15:42.295 20:10:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:42.295 20:10:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:42.295 20:10:40 -- common/autotest_common.sh@10 -- # set +x 00:15:42.295 ************************************ 00:15:42.295 START TEST sw_hotplug 00:15:42.295 ************************************ 00:15:42.295 20:10:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh 00:15:42.295 * Looking for test storage... 
00:15:42.295 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme 00:15:42.295 20:10:40 -- nvme/sw_hotplug.sh@122 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:15:45.586 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:15:45.586 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:15:45.586 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:15:45.586 20:10:43 -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:15:45.586 20:10:43 -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:15:45.586 20:10:43 -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:15:45.586 20:10:43 -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:15:45.586 20:10:43 -- scripts/common.sh@311 -- # local bdf bdfs 00:15:45.586 20:10:43 -- scripts/common.sh@312 -- # local nvmes 00:15:45.586 20:10:43 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:15:45.586 20:10:43 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:15:45.586 20:10:43 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:15:45.586 20:10:43 -- scripts/common.sh@297 -- # local bdf= 00:15:45.586 20:10:43 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:15:45.586 20:10:43 -- scripts/common.sh@232 -- # local class 00:15:45.586 20:10:43 -- scripts/common.sh@233 -- # local subclass 00:15:45.586 20:10:43 -- scripts/common.sh@234 -- # local progif 00:15:45.586 20:10:43 -- scripts/common.sh@235 -- # printf %02x 1 00:15:45.586 20:10:43 -- scripts/common.sh@235 -- # class=01 00:15:45.586 20:10:43 -- scripts/common.sh@236 -- # printf %02x 8 00:15:45.586 20:10:43 -- scripts/common.sh@236 -- # subclass=08 00:15:45.586 20:10:43 -- scripts/common.sh@237 -- # printf %02x 2 00:15:45.586 20:10:43 -- scripts/common.sh@237 -- # progif=02 00:15:45.586 20:10:43 -- scripts/common.sh@239 -- # hash lspci 00:15:45.586 20:10:43 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:15:45.586 20:10:43 -- scripts/common.sh@242 -- # grep -i -- -p02 00:15:45.586 20:10:43 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:15:45.586 20:10:43 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:15:45.586 20:10:43 -- scripts/common.sh@244 -- # tr -d '"' 00:15:45.586 20:10:43 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:15:45.586 20:10:43 -- scripts/common.sh@300 -- # pci_can_use 0000:5e:00.0 00:15:45.586 20:10:43 -- scripts/common.sh@15 -- # local i 00:15:45.586 20:10:43 -- scripts/common.sh@18 -- 
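Before any hotplugging, sw_hotplug.sh enumerates NVMe functions itself rather than trusting /dev: the iter_pci_class_code walk above builds class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVM Express) and filters lspci output against it. The same device list can be produced with one pipeline, independent of SPDK:

  # Print the domain:bus:dev.fn of every NVMe controller (class 0108, prog-if 02).
  lspci -mm -n -D | grep -i -- -p02 | awk -F' ' '$2 ~ /0108/ {print $1}' | tr -d '"'
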
# [[ =~ 0000:5e:00.0 ]] 00:15:45.586 20:10:43 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:15:45.586 20:10:43 -- scripts/common.sh@24 -- # return 0 00:15:45.586 20:10:43 -- scripts/common.sh@301 -- # echo 0000:5e:00.0 00:15:45.586 20:10:43 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:15:45.586 20:10:43 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:15:45.586 20:10:43 -- scripts/common.sh@322 -- # uname -s 00:15:45.586 20:10:43 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:15:45.586 20:10:43 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:15:45.586 20:10:43 -- scripts/common.sh@327 -- # (( 1 )) 00:15:45.586 20:10:43 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:15:45.586 20:10:43 -- nvme/sw_hotplug.sh@127 -- # nvme_count=1 00:15:45.586 20:10:43 -- nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:15:45.586 20:10:43 -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:15:45.586 20:10:43 -- common/autotest_common.sh@10 -- # set +x 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=2138717 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@68 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:15:48.880 20:10:46 -- common/autotest_common.sh@698 -- # [[ -t 0 ]] 00:15:48.880 20:10:46 -- common/autotest_common.sh@698 -- # exec 00:15:48.880 20:10:46 -- common/autotest_common.sh@700 -- # local time=0 TIMEFORMAT=%2R 00:15:48.880 20:10:46 -- common/autotest_common.sh@706 -- # remove_attach_helper 3 6 false 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:15:48.880 20:10:46 -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:15:48.881 20:10:46 -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:15:48.881 20:10:46 -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:15:48.881 20:10:46 -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:15:48.881 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.881 Initializing NVMe Controllers 00:15:49.819 Attaching to 0000:5e:00.0 00:15:51.725 Attached to 0000:5e:00.0 00:15:51.725 Initialization complete. Starting I/O... 00:15:51.725 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 128 I/Os completed (+128) 00:15:51.725 00:15:52.661 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 3200 I/Os completed (+3072) 00:15:52.661 00:15:54.039 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 6528 I/Os completed (+3328) 00:15:54.039 00:15:54.607 20:10:52 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:15:54.607 20:10:52 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:15:54.607 20:10:52 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:15:54.607 [2024-04-25 20:10:52.447120] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 
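The hotplug example app (-n 3 -r 3) expects the device to disappear and come back three times, and the echo statements above are the script doing exactly that through sysfs; no physical removal is involved. A sketch of the underlying mechanism for one iteration follows. The exact files the test writes are wrapped in its helpers, so treat the rebind half in particular as an approximation rather than the script's literal steps:

  bdf=0000:5e:00.0
  echo 1 > "/sys/bus/pci/devices/$bdf/remove"          # surprise-remove the function
  sleep 6                                              # give the app time to see the failed controller
  echo 1 > /sys/bus/pci/rescan                         # rediscover it
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
  [[ -e /sys/bus/pci/devices/$bdf/driver ]] && echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
  echo "$bdf" > /sys/bus/pci/drivers_probe             # hand it back to vfio-pci for the next round
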
00:15:54.607 Controller removed: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:15:54.607 [2024-04-25 20:10:52.447185] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 [2024-04-25 20:10:52.447210] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 [2024-04-25 20:10:52.447224] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 [2024-04-25 20:10:52.447239] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 Controller removed: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:15:54.607 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:15:54.607 [2024-04-25 20:10:52.448473] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 [2024-04-25 20:10:52.448502] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 [2024-04-25 20:10:52.448518] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 [2024-04-25 20:10:52.448534] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:54.607 20:10:52 -- nvme/sw_hotplug.sh@38 -- # false 00:15:54.607 20:10:52 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:15:54.607 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:5e:00.0/vendor 00:15:54.607 EAL: Scan for (pci) bus failed. 00:15:54.866 20:10:52 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:15:54.866 20:10:52 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:15:54.866 20:10:52 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:15:54.866 00:15:55.802 00:15:56.737 00:15:57.673 00:15:58.241 20:10:55 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:15:58.241 20:10:55 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:15:58.241 20:10:55 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:15:58.810 Attaching to 0000:5e:00.0 00:16:00.743 Attached to 0000:5e:00.0 00:16:00.743 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 0 I/Os completed (+0) 00:16:00.743 00:16:01.002 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 128 I/Os completed (+128) 00:16:01.002 00:16:01.002 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 256 I/Os completed (+128) 00:16:01.002 00:16:01.938 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 2944 I/Os completed (+2688) 00:16:01.938 00:16:02.875 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 6134 I/Os completed (+3190) 00:16:02.875 00:16:03.810 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 9334 I/Os completed (+3200) 00:16:03.810 00:16:04.070 20:11:01 -- nvme/sw_hotplug.sh@56 -- # false 00:16:04.070 20:11:01 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:16:04.070 20:11:01 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:16:04.070 20:11:01 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:16:04.070 [2024-04-25 20:11:01.901555] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 
00:16:04.070 Controller removed: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:16:04.070 [2024-04-25 20:11:01.901595] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 [2024-04-25 20:11:01.901618] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 [2024-04-25 20:11:01.901639] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 [2024-04-25 20:11:01.901660] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 Controller removed: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:16:04.070 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:16:04.070 [2024-04-25 20:11:01.902772] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 [2024-04-25 20:11:01.902800] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 [2024-04-25 20:11:01.902816] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 [2024-04-25 20:11:01.902831] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:04.070 20:11:01 -- nvme/sw_hotplug.sh@38 -- # false 00:16:04.070 20:11:01 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:16:04.329 20:11:02 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:16:04.329 20:11:02 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:16:04.329 20:11:02 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:16:04.898 00:16:05.833 00:16:06.770 00:16:07.708 20:11:05 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:16:07.708 20:11:05 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:16:07.708 20:11:05 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:16:08.276 Attaching to 0000:5e:00.0 00:16:10.812 Attached to 0000:5e:00.0 00:16:10.812 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 0 I/Os completed (+0) 00:16:10.812 00:16:10.812 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 128 I/Os completed (+128) 00:16:10.812 00:16:10.812 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 256 I/Os completed (+128) 00:16:10.812 00:16:10.812 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 1408 I/Os completed (+1152) 00:16:10.812 00:16:11.750 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 4608 I/Os completed (+3200) 00:16:11.750 00:16:13.129 INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ): 7936 I/Os completed (+3328) 00:16:13.129 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@56 -- # false 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:16:13.698 [2024-04-25 20:11:11.383488] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 
00:16:13.698 Controller removed: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:16:13.698 [2024-04-25 20:11:11.383533] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 [2024-04-25 20:11:11.383557] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 [2024-04-25 20:11:11.383572] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 [2024-04-25 20:11:11.383586] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 Controller removed: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:16:13.698 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:16:13.698 [2024-04-25 20:11:11.384782] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 [2024-04-25 20:11:11.384809] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 [2024-04-25 20:11:11.384825] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 [2024-04-25 20:11:11.384841] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@38 -- # false 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:16:13.698 20:11:11 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:16:13.698 00:16:15.075 00:16:16.012 00:16:16.946 00:16:16.946 20:11:14 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:16:16.946 20:11:14 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:16:16.946 20:11:14 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:16:18.324 Attaching to 0000:5e:00.0 00:16:20.226 Attached to 0000:5e:00.0 00:16:20.226 unregister_dev: INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN ) 00:16:23.572 20:11:20 -- nvme/sw_hotplug.sh@56 -- # false 00:16:23.572 20:11:20 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:16:23.572 20:11:20 -- common/autotest_common.sh@706 -- # time=34.40 00:16:23.572 20:11:20 -- common/autotest_common.sh@708 -- # echo 34.40 00:16:23.572 20:11:20 -- nvme/sw_hotplug.sh@16 -- # helper_time=34.40 00:16:23.572 20:11:20 -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 34.40 1 00:16:23.572 remove_attach_helper took 34.40s to complete (handling 1 nvme drive(s)) 20:11:20 -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@81 -- # kill -0 2138717 00:16:30.145 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (2138717) - No such process 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@83 -- # wait 2138717 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@95 -- # local dev 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=2143147 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:16:30.145 20:11:26 -- nvme/sw_hotplug.sh@101 -- # waitforlisten 2143147 00:16:30.145 20:11:26 -- 
common/autotest_common.sh@819 -- # '[' -z 2143147 ']' 00:16:30.145 20:11:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.145 20:11:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.145 20:11:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.145 20:11:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.145 20:11:26 -- common/autotest_common.sh@10 -- # set +x 00:16:30.145 [2024-04-25 20:11:26.890539] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:16:30.145 [2024-04-25 20:11:26.890617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143147 ] 00:16:30.145 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.145 [2024-04-25 20:11:26.987952] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.145 [2024-04-25 20:11:27.092816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:30.145 [2024-04-25 20:11:27.092972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.145 [2024-04-25 20:11:27.303467] 'OCF_Core' volume operations registered 00:16:30.145 [2024-04-25 20:11:27.306974] 'OCF_Cache' volume operations registered 00:16:30.145 [2024-04-25 20:11:27.310959] 'OCF Composite' volume operations registered 00:16:30.145 [2024-04-25 20:11:27.314470] 'SPDK_block_device' volume operations registered 00:16:30.145 20:11:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:30.145 20:11:27 -- common/autotest_common.sh@852 -- # return 0 00:16:30.145 20:11:27 -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:16:30.145 20:11:27 -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:5e:00.0 00:16:30.145 20:11:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.145 20:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.679 Nvme00n1 00:16:32.679 20:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.679 20:11:30 -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:16:32.679 20:11:30 -- common/autotest_common.sh@887 -- # local bdev_name=Nvme00n1 00:16:32.679 20:11:30 -- common/autotest_common.sh@888 -- # local bdev_timeout=6 00:16:32.679 20:11:30 -- common/autotest_common.sh@889 -- # local i 00:16:32.680 20:11:30 -- common/autotest_common.sh@890 -- # [[ -z 6 ]] 00:16:32.680 20:11:30 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:16:32.680 20:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.680 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:32.680 20:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.680 20:11:30 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:16:32.680 20:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.680 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:32.680 [ 00:16:32.680 { 00:16:32.680 "name": "Nvme00n1", 00:16:32.680 "aliases": [ 00:16:32.680 "9615e35b-6677-4f69-be16-9ca9fba5ddb6" 00:16:32.680 ], 00:16:32.680 "product_name": "NVMe disk", 00:16:32.680 "block_size": 512, 00:16:32.680 "num_blocks": 7814037168, 
00:16:32.680 "uuid": "9615e35b-6677-4f69-be16-9ca9fba5ddb6", 00:16:32.680 "assigned_rate_limits": { 00:16:32.680 "rw_ios_per_sec": 0, 00:16:32.680 "rw_mbytes_per_sec": 0, 00:16:32.680 "r_mbytes_per_sec": 0, 00:16:32.680 "w_mbytes_per_sec": 0 00:16:32.680 }, 00:16:32.680 "claimed": false, 00:16:32.680 "zoned": false, 00:16:32.680 "supported_io_types": { 00:16:32.680 "read": true, 00:16:32.680 "write": true, 00:16:32.680 "unmap": true, 00:16:32.680 "write_zeroes": true, 00:16:32.680 "flush": true, 00:16:32.680 "reset": true, 00:16:32.680 "compare": false, 00:16:32.680 "compare_and_write": false, 00:16:32.680 "abort": true, 00:16:32.680 "nvme_admin": true, 00:16:32.680 "nvme_io": true 00:16:32.680 }, 00:16:32.680 "driver_specific": { 00:16:32.680 "nvme": [ 00:16:32.680 { 00:16:32.680 "pci_address": "0000:5e:00.0", 00:16:32.680 "trid": { 00:16:32.680 "trtype": "PCIe", 00:16:32.680 "traddr": "0000:5e:00.0" 00:16:32.680 }, 00:16:32.680 "ctrlr_data": { 00:16:32.680 "cntlid": 0, 00:16:32.680 "vendor_id": "0x8086", 00:16:32.680 "model_number": "INTEL SSDPE2KX040T8", 00:16:32.680 "serial_number": "BTLJ83030AK84P0DGN", 00:16:32.680 "firmware_revision": "VDV10184", 00:16:32.680 "oacs": { 00:16:32.680 "security": 0, 00:16:32.680 "format": 1, 00:16:32.680 "firmware": 1, 00:16:32.680 "ns_manage": 1 00:16:32.680 }, 00:16:32.680 "multi_ctrlr": false, 00:16:32.680 "ana_reporting": false 00:16:32.680 }, 00:16:32.680 "vs": { 00:16:32.680 "nvme_version": "1.2" 00:16:32.680 }, 00:16:32.680 "ns_data": { 00:16:32.680 "id": 1, 00:16:32.680 "can_share": false 00:16:32.680 } 00:16:32.680 } 00:16:32.680 ], 00:16:32.680 "mp_policy": "active_passive" 00:16:32.680 } 00:16:32.680 } 00:16:32.680 ] 00:16:32.680 20:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.939 20:11:30 -- common/autotest_common.sh@895 -- # return 0 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:16:32.939 20:11:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.939 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:16:32.939 20:11:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:16:32.939 20:11:30 -- common/autotest_common.sh@698 -- # [[ -t 0 ]] 00:16:32.939 20:11:30 -- common/autotest_common.sh@698 -- # exec 00:16:32.939 20:11:30 -- common/autotest_common.sh@700 -- # local time=0 TIMEFORMAT=%2R 00:16:32.939 20:11:30 -- common/autotest_common.sh@706 -- # remove_attach_helper 3 6 true 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:16:32.939 20:11:30 -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:16:39.505 20:11:36 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:16:39.505 20:11:36 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:16:39.505 20:11:36 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:16:39.505 [2024-04-25 20:11:36.677099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 
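Here the target-mode half of the test (tgt_run_hotplug) repeats the exercise through spdk_tgt RPCs: it attaches the controller as Nvme00, waits for the Nvme00n1 bdev, dumps the JSON description shown above, and enables hotplug monitoring before the device is pulled again. A hedged sketch of issuing the same RPCs by hand with scripts/rpc.py against the default /var/tmp/spdk.sock socket (names and addresses taken from the log):

  RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py

  # Attach the PCIe controller and expose it as bdev Nvme00n1.
  "$RPC" bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:5e:00.0

  # Inspect the resulting bdev; this prints the JSON block captured above
  # (block_size, num_blocks, driver_specific.nvme[].pci_address, ...).
  "$RPC" bdev_get_bdevs -b Nvme00n1

  # Let the target detect surprise removal/insertion on its own.
  "$RPC" bdev_nvme_set_hotplug -e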
00:16:39.505 [2024-04-25 20:11:36.677207] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.505 [2024-04-25 20:11:36.677231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.505 [2024-04-25 20:11:36.677248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.505 [2024-04-25 20:11:36.677272] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.505 [2024-04-25 20:11:36.677287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.505 [2024-04-25 20:11:36.677301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.505 [2024-04-25 20:11:36.677317] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.505 [2024-04-25 20:11:36.677332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.505 [2024-04-25 20:11:36.677347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.505 [2024-04-25 20:11:36.677362] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:39.505 [2024-04-25 20:11:36.677374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.505 [2024-04-25 20:11:36.677389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.505 20:11:36 -- nvme/sw_hotplug.sh@38 -- # true 00:16:39.505 20:11:36 -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:16:44.775 20:11:42 -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:16:44.775 20:11:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.775 20:11:42 -- nvme/sw_hotplug.sh@40 -- # jq length 00:16:44.775 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:16:45.034 20:11:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:45.034 20:11:42 -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:16:45.034 20:11:42 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:16:45.034 20:11:42 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:16:45.034 20:11:42 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:16:45.034 20:11:42 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:16:48.322 20:11:46 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:16:48.322 20:11:46 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:16:48.322 20:11:46 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@56 -- # true 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:16:54.889 20:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.889 20:11:52 -- common/autotest_common.sh@10 -- # set +x 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@58 -- # sort 00:16:54.889 20:11:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:16:54.889 [2024-04-25 20:11:52.296527] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 00:16:54.889 [2024-04-25 20:11:52.296625] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:54.889 [2024-04-25 20:11:52.296655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.889 [2024-04-25 20:11:52.296672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.889 [2024-04-25 20:11:52.296695] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:54.889 [2024-04-25 20:11:52.296709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.889 [2024-04-25 20:11:52.296723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.889 [2024-04-25 20:11:52.296740] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:54.889 [2024-04-25 20:11:52.296753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.889 [2024-04-25 20:11:52.296768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.889 [2024-04-25 20:11:52.296784] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:16:54.889 [2024-04-25 20:11:52.296798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.889 [2024-04-25 20:11:52.296813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@38 -- # true 00:16:54.889 20:11:52 -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:17:01.474 20:11:58 -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:17:01.474 20:11:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:01.474 20:11:58 -- nvme/sw_hotplug.sh@40 -- # jq length 00:17:01.474 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:17:01.474 20:11:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:01.474 20:11:58 -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:17:01.474 20:11:58 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:17:01.474 20:11:58 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:17:01.474 20:11:58 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:17:01.474 20:11:58 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:17:04.011 20:12:01 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:17:04.011 20:12:01 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:17:04.011 20:12:01 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@56 -- # true 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs 
| jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:17:10.579 20:12:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.579 20:12:07 -- common/autotest_common.sh@10 -- # set +x 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@58 -- # sort 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:17:10.579 20:12:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:17:10.579 [2024-04-25 20:12:07.912537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 00:17:10.579 [2024-04-25 20:12:07.912660] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:10.579 [2024-04-25 20:12:07.912684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.579 [2024-04-25 20:12:07.912702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.579 [2024-04-25 20:12:07.912726] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:10.579 [2024-04-25 20:12:07.912741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.579 [2024-04-25 20:12:07.912756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.579 [2024-04-25 20:12:07.912771] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:10.579 [2024-04-25 20:12:07.912784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.579 [2024-04-25 20:12:07.912800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.579 [2024-04-25 20:12:07.912817] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:10.579 [2024-04-25 20:12:07.912830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:10.579 [2024-04-25 20:12:07.912843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@38 -- # true 00:17:10.579 20:12:07 -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:17:17.169 20:12:13 -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:17:17.169 20:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:17.169 20:12:13 -- nvme/sw_hotplug.sh@40 -- # jq length 00:17:17.169 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:17:17.169 20:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:17.169 20:12:13 -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:17:17.169 20:12:13 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:17:17.169 20:12:14 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 
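Each target-mode cycle ends with the check traced at sw_hotplug.sh@58/@59: ask the target which PCI addresses still back an NVMe bdev and compare the sorted list with the expected BDF. A standalone sketch of that verification, assuming the same rpc.py and jq tooling (jq will complain if a non-NVMe bdev without driver_specific.nvme is present):

  rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
  expected="0000:5e:00.0"

  # Collect the PCI address behind every NVMe bdev the target still exposes.
  bdfs=$("$rpc" bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)

  if [[ "$bdfs" == "$expected" ]]; then
      echo "bdev for $expected is back after the hotplug cycle"
  else
      echo "unexpected bdf list: '$bdfs'" >&2
  fi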
00:17:17.169 20:12:14 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:17:17.169 20:12:14 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:17:19.703 20:12:17 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:17:19.703 20:12:17 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:17:19.703 20:12:17 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:17:26.271 20:12:23 -- nvme/sw_hotplug.sh@56 -- # true 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:17:26.272 20:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:17:26.272 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@58 -- # sort 00:17:26.272 20:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:17:26.272 20:12:23 -- common/autotest_common.sh@706 -- # time=52.86 00:17:26.272 20:12:23 -- common/autotest_common.sh@708 -- # echo 52.86 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@16 -- # helper_time=52.86 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 52.86 1 00:17:26.272 remove_attach_helper took 52.86s to complete (handling 1 nvme drive(s)) 20:12:23 -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:17:26.272 20:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.272 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.272 20:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:17:26.272 20:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:26.272 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:17:26.272 20:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:17:26.272 20:12:23 -- common/autotest_common.sh@698 -- # [[ -t 0 ]] 00:17:26.272 20:12:23 -- common/autotest_common.sh@698 -- # exec 00:17:26.272 20:12:23 -- common/autotest_common.sh@700 -- # local time=0 TIMEFORMAT=%2R 00:17:26.272 20:12:23 -- common/autotest_common.sh@706 -- # remove_attach_helper 3 6 true 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:17:26.272 20:12:23 -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:17:32.835 20:12:29 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:17:32.835 20:12:29 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:17:32.835 20:12:29 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:17:32.835 [2024-04-25 20:12:29.622901] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 
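The "took 52.86s" figure above comes from the timing wrapper visible in the trace (the local time=0 TIMEFORMAT=%2R lines around remove_attach_helper): the helper is run under bash's built-in time and the elapsed seconds are captured and re-printed. A minimal sketch of that idiom with a placeholder workload; the real timing_cmd also juggles file descriptors, which is omitted here:

  TIMEFORMAT=%2R                 # wall-clock seconds, two decimals

  my_workload() {                # stand-in for remove_attach_helper
      sleep 2
  }

  # `time` reports on stderr; fold it into the substitution and keep only that.
  elapsed=$( { time my_workload >/dev/null; } 2>&1 )
  printf 'workload took %ss to complete\n' "$elapsed"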
00:17:32.835 [2024-04-25 20:12:29.623015] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:32.835 [2024-04-25 20:12:29.623038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.836 [2024-04-25 20:12:29.623056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.836 [2024-04-25 20:12:29.623078] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:32.836 [2024-04-25 20:12:29.623092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.836 [2024-04-25 20:12:29.623107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.836 [2024-04-25 20:12:29.623122] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:32.836 [2024-04-25 20:12:29.623135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.836 [2024-04-25 20:12:29.623151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.836 [2024-04-25 20:12:29.623166] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:32.836 [2024-04-25 20:12:29.623179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.836 [2024-04-25 20:12:29.623198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.836 20:12:29 -- nvme/sw_hotplug.sh@38 -- # true 00:17:32.836 20:12:29 -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:17:38.135 20:12:35 -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:17:38.135 20:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:38.135 20:12:35 -- nvme/sw_hotplug.sh@40 -- # jq length 00:17:38.135 20:12:35 -- common/autotest_common.sh@10 -- # set +x 00:17:38.135 20:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:38.135 20:12:35 -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:17:38.135 20:12:35 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:17:38.135 20:12:35 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:17:38.135 20:12:35 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:17:38.135 20:12:35 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:17:41.422 20:12:39 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:17:41.422 20:12:39 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:17:41.422 20:12:39 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@56 -- # true 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:17:47.989 20:12:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:17:47.989 20:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@58 -- # sort 00:17:47.989 20:12:45 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:17:47.989 [2024-04-25 20:12:45.232499] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 00:17:47.989 [2024-04-25 20:12:45.232597] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:47.989 [2024-04-25 20:12:45.232620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.989 [2024-04-25 20:12:45.232644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.989 [2024-04-25 20:12:45.232667] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:47.989 [2024-04-25 20:12:45.232680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.989 [2024-04-25 20:12:45.232695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.989 [2024-04-25 20:12:45.232711] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:47.989 [2024-04-25 20:12:45.232724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.989 [2024-04-25 20:12:45.232738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.989 [2024-04-25 20:12:45.232754] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:17:47.989 [2024-04-25 20:12:45.232768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:47.989 [2024-04-25 20:12:45.232782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@38 -- # true 00:17:47.989 20:12:45 -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:17:54.557 20:12:51 -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:17:54.557 20:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:54.557 20:12:51 -- nvme/sw_hotplug.sh@40 -- # jq length 00:17:54.558 20:12:51 -- common/autotest_common.sh@10 -- # set +x 00:17:54.558 20:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:54.558 20:12:51 -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:17:54.558 20:12:51 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:17:54.558 20:12:51 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:17:54.558 20:12:51 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:17:54.558 20:12:51 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:17:57.088 20:12:54 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:17:57.088 20:12:54 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:17:57.088 20:12:54 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@56 -- # true 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs 
| jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:18:03.653 20:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.653 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@58 -- # sort 00:18:03.653 20:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@35 -- # echo 1 00:18:03.653 [2024-04-25 20:13:00.848583] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state. 00:18:03.653 [2024-04-25 20:13:00.848694] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:03.653 [2024-04-25 20:13:00.848718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.653 [2024-04-25 20:13:00.848736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.653 [2024-04-25 20:13:00.848759] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:03.653 [2024-04-25 20:13:00.848773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.653 [2024-04-25 20:13:00.848788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.653 [2024-04-25 20:13:00.848803] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:03.653 [2024-04-25 20:13:00.848816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.653 [2024-04-25 20:13:00.848832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.653 [2024-04-25 20:13:00.848849] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:18:03.653 [2024-04-25 20:13:00.848862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:03.653 [2024-04-25 20:13:00.848876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@38 -- # true 00:18:03.653 20:13:00 -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:18:08.922 20:13:06 -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:18:09.181 20:13:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:09.181 20:13:06 -- nvme/sw_hotplug.sh@40 -- # jq length 00:18:09.181 20:13:06 -- common/autotest_common.sh@10 -- # set +x 00:18:09.181 20:13:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:09.181 20:13:06 -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:18:09.181 20:13:06 -- nvme/sw_hotplug.sh@44 -- # echo 1 00:18:09.181 20:13:07 -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 
00:18:09.181 20:13:07 -- nvme/sw_hotplug.sh@47 -- # echo vfio-pci 00:18:09.181 20:13:07 -- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0 00:18:12.495 20:13:10 -- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0 00:18:12.495 20:13:10 -- nvme/sw_hotplug.sh@50 -- # echo '' 00:18:12.495 20:13:10 -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@56 -- # true 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:18:19.062 20:13:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:19.062 20:13:16 -- common/autotest_common.sh@10 -- # set +x 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@58 -- # sort 00:18:19.062 20:13:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:18:19.062 20:13:16 -- common/autotest_common.sh@706 -- # time=52.87 00:18:19.062 20:13:16 -- common/autotest_common.sh@708 -- # echo 52.87 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@16 -- # helper_time=52.87 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 52.87 1 00:18:19.062 remove_attach_helper took 52.87s to complete (handling 1 nvme drive(s)) 20:13:16 -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:18:19.062 20:13:16 -- nvme/sw_hotplug.sh@118 -- # killprocess 2143147 00:18:19.062 20:13:16 -- common/autotest_common.sh@926 -- # '[' -z 2143147 ']' 00:18:19.062 20:13:16 -- common/autotest_common.sh@930 -- # kill -0 2143147 00:18:19.062 20:13:16 -- common/autotest_common.sh@931 -- # uname 00:18:19.062 20:13:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:19.062 20:13:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2143147 00:18:19.062 20:13:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:19.062 20:13:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:19.062 20:13:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2143147' 00:18:19.062 killing process with pid 2143147 00:18:19.062 20:13:16 -- common/autotest_common.sh@945 -- # kill 2143147 00:18:19.062 20:13:16 -- common/autotest_common.sh@950 -- # wait 2143147 00:18:23.254 00:18:23.254 real 2m40.661s 00:18:23.254 user 1m46.048s 00:18:23.254 sys 0m41.878s 00:18:23.254 20:13:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:23.254 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:23.254 ************************************ 00:18:23.254 END TEST sw_hotplug 00:18:23.254 ************************************ 00:18:23.254 20:13:20 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:18:23.254 20:13:20 -- spdk/autotest.sh@264 -- # '[' 1 -eq 1 ']' 00:18:23.254 20:13:20 -- spdk/autotest.sh@265 -- # run_test ioat /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh 00:18:23.254 20:13:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:23.254 20:13:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:23.254 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:23.254 ************************************ 00:18:23.254 START TEST ioat 00:18:23.254 ************************************ 
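A few lines above, teardown goes through the killprocess helper: confirm the spdk_tgt PID still exists with kill -0, check what it is with ps, then kill it and wait for the exit status. A hedged sketch of that pattern (PID taken from the log; wait only reaps processes started by the same shell):

  pid=2143147

  if kill -0 "$pid" 2>/dev/null; then            # does the process still exist?
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  else
      echo "process $pid already gone"
  fi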
00:18:23.254 20:13:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh 00:18:23.254 * Looking for test storage... 00:18:23.254 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat 00:18:23.254 20:13:20 -- ioat/ioat.sh@10 -- # run_test ioat_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1 00:18:23.254 20:13:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:23.254 20:13:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:23.254 20:13:20 -- common/autotest_common.sh@10 -- # set +x 00:18:23.254 ************************************ 00:18:23.254 START TEST ioat_perf 00:18:23.254 ************************************ 00:18:23.254 20:13:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1 00:18:23.254 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.632 [2024-04-25 20:13:22.504934] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.504995] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505008] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505020] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505031] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505047] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505058] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505069] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505081] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505093] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505104] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505115] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505126] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505138] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505149] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown! 00:18:24.632 [2024-04-25 20:13:22.505160] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown! 
00:18:24.632 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021 00:18:24.632 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021 00:18:24.632 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021 00:18:24.632 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021 00:18:24.633 Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021 00:18:24.633 User configuration: 00:18:24.633 Number of channels: 1 00:18:24.633 Transfer size: 4096 bytes 00:18:24.633 Queue depth: 256 00:18:24.633 Run time: 1 seconds 00:18:24.633 Core mask: 0x1 00:18:24.633 Verify: No 00:18:24.633 00:18:24.633 Associating ioat_channel 0 with core 0 00:18:24.633 Starting thread on core 0 00:18:24.633 Channel_ID Core Transfers Bandwidth Failed 00:18:24.633 ----------------------------------------------------------- 00:18:24.633 0 0 691712/s 2702 MiB/s 0 00:18:24.633 =========================================================== 00:18:24.633 Total: 691712/s 2702 MiB/s 0 00:18:24.633 00:18:24.633 real 0m1.661s 00:18:24.633 user 0m1.326s 00:18:24.633 sys 0m0.147s 00:18:24.633 20:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:24.633 20:13:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.633 ************************************ 00:18:24.633 END TEST ioat_perf 00:18:24.633 ************************************ 00:18:24.633 20:13:22 -- ioat/ioat.sh@12 -- # run_test ioat_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1 00:18:24.633 20:13:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:24.633 20:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:24.633 20:13:22 -- common/autotest_common.sh@10 -- # set +x 00:18:24.633 ************************************ 00:18:24.633 START TEST ioat_verify 00:18:24.633 ************************************ 00:18:24.633 20:13:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1 00:18:24.892 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.796 [2024-04-25 20:13:24.268401] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268490] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268504] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268515] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown! 
00:18:26.796 [2024-04-25 20:13:24.268533] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268544] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268555] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268566] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268578] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268589] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268600] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268611] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268622] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268639] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268651] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown! 00:18:26.796 [2024-04-25 20:13:24.268662] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown! 00:18:26.796 Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021 00:18:26.796 Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021 00:18:26.796 User configuration: 00:18:26.796 Run time: 1 seconds 00:18:26.796 Core mask: 0x1 00:18:26.796 Queue depth: 32 00:18:26.796 lcore = 0, copy success = 542, copy failed = 0, fill success = 542, fill failed = 0 00:18:26.796 00:18:26.796 real 0m1.718s 00:18:26.796 user 0m1.381s 00:18:26.796 sys 0m0.146s 00:18:26.796 20:13:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.796 20:13:24 -- common/autotest_common.sh@10 -- # set +x 00:18:26.796 ************************************ 00:18:26.796 END TEST ioat_verify 00:18:26.796 ************************************ 00:18:26.796 00:18:26.796 real 0m3.559s 00:18:26.796 user 0m2.774s 00:18:26.796 sys 0m0.434s 00:18:26.796 20:13:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 
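The two ioat sub-tests above do little more than run the DMA-engine example binaries for one second each and print the per-channel statistics (691712 copies/s for ioat_perf, 542 copy/fill successes for verify). A sketch of invoking them directly, with the paths and the -t option taken from the log; queue depth, transfer size and core mask stay at their defaults here, and both binaries need root for hugepages and device access:

  SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk

  # Throughput run of the IOAT copy engines, 1 second.
  "$SPDK/build/examples/ioat_perf" -t 1

  # Data-integrity (verify) run of the same engines, 1 second.
  "$SPDK/build/examples/verify" -t 1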
00:18:26.796 20:13:24 -- common/autotest_common.sh@10 -- # set +x 00:18:26.796 ************************************ 00:18:26.796 END TEST ioat 00:18:26.796 ************************************ 00:18:26.796 20:13:24 -- spdk/autotest.sh@268 -- # timing_exit lib 00:18:26.796 20:13:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:26.796 20:13:24 -- common/autotest_common.sh@10 -- # set +x 00:18:26.796 20:13:24 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:18:26.796 20:13:24 -- spdk/autotest.sh@339 -- # run_test ocf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh 00:18:26.796 20:13:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:26.796 20:13:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:26.796 20:13:24 -- common/autotest_common.sh@10 -- # set +x 00:18:26.796 ************************************ 00:18:26.796 START TEST ocf 00:18:26.796 ************************************ 00:18:26.796 20:13:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh 00:18:26.796 * Looking for test storage... 00:18:26.796 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf 00:18:26.796 20:13:24 -- ocf/ocf.sh@11 -- # run_test ocf_fio_modes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh 00:18:26.796 20:13:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:26.796 20:13:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:26.796 20:13:24 -- common/autotest_common.sh@10 -- # set +x 00:18:26.796 ************************************ 00:18:26.796 START TEST ocf_fio_modes 00:18:26.796 ************************************ 00:18:26.796 20:13:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh 00:18:26.796 20:13:24 -- ocf/common.sh@9 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:18:26.796 20:13:24 -- integrity/fio-modes.sh@20 -- # clear_nvme 00:18:26.796 20:13:24 -- ocf/common.sh@12 -- # mapfile -t bdf 00:18:26.796 20:13:24 -- ocf/common.sh@12 -- # get_first_nvme_bdf 00:18:26.796 20:13:24 -- common/autotest_common.sh@1509 -- # bdfs=() 00:18:26.796 20:13:24 -- common/autotest_common.sh@1509 -- # local bdfs 00:18:26.796 20:13:24 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:18:26.796 20:13:24 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:18:26.796 20:13:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:18:26.796 20:13:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:18:26.796 20:13:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:26.796 20:13:24 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh 00:18:26.796 20:13:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:18:26.796 20:13:24 -- common/autotest_common.sh@1500 -- # (( 1 == 0 
)) 00:18:26.796 20:13:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:18:26.796 20:13:24 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:18:26.796 20:13:24 -- ocf/common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset 00:18:30.083 Waiting for block devices as requested 00:18:30.083 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:18:30.083 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:30.083 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:30.083 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:30.083 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:30.342 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:30.342 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:30.342 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:30.600 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:30.600 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:18:30.600 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:18:30.859 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:18:30.859 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:18:30.859 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:18:31.118 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:18:31.118 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:18:31.118 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:18:31.118 20:13:29 -- ocf/common.sh@17 -- # get_nvme_name_from_bdf 0000:5e:00.0 00:18:31.118 20:13:29 -- common/autotest_common.sh@1466 -- # blkname=() 00:18:31.118 20:13:29 -- common/autotest_common.sh@1468 -- # lsblk -d --output NAME 00:18:31.118 20:13:29 -- common/autotest_common.sh@1468 -- # grep '^nvme' 00:18:31.118 20:13:29 -- common/autotest_common.sh@1468 -- # nvme_devs=nvme0n1 00:18:31.118 20:13:29 -- common/autotest_common.sh@1469 -- # '[' -z nvme0n1 ']' 00:18:31.118 20:13:29 -- common/autotest_common.sh@1472 -- # for dev in $nvme_devs 00:18:31.118 20:13:29 -- common/autotest_common.sh@1473 -- # readlink /sys/block/nvme0n1/device/device 00:18:31.118 20:13:29 -- common/autotest_common.sh@1473 -- # link_name=../../../0000:5e:00.0 00:18:31.118 20:13:29 -- common/autotest_common.sh@1474 -- # '[' -z ../../../0000:5e:00.0 ']' 00:18:31.118 20:13:29 -- common/autotest_common.sh@1477 -- # basename ../../../0000:5e:00.0 00:18:31.118 20:13:29 -- common/autotest_common.sh@1477 -- # bdf=0000:5e:00.0 00:18:31.118 20:13:29 -- common/autotest_common.sh@1478 -- # '[' 0000:5e:00.0 = 0000:5e:00.0 ']' 00:18:31.118 20:13:29 -- common/autotest_common.sh@1479 -- # blkname+=($dev) 00:18:31.118 20:13:29 -- common/autotest_common.sh@1483 -- # printf '%s\n' nvme0n1 00:18:31.118 20:13:29 -- ocf/common.sh@17 -- # name=nvme0n1 00:18:31.118 20:13:29 -- ocf/common.sh@18 -- # lsblk /dev/nvme0n1 --output MOUNTPOINT -n 00:18:31.118 20:13:29 -- ocf/common.sh@18 -- # wc -w 00:18:31.376 20:13:29 -- ocf/common.sh@18 -- # mountpoints=0 00:18:31.376 20:13:29 -- ocf/common.sh@19 -- # '[' 0 '!=' 0 ']' 00:18:31.376 20:13:29 -- ocf/common.sh@22 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1000 oflag=direct 00:18:31.635 1000+0 records in 00:18:31.635 1000+0 records out 00:18:31.635 1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.469506 s, 2.2 GB/s 00:18:31.635 20:13:29 -- ocf/common.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:18:34.922 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 
0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:18:34.922 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:18:38.206 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:18:38.206 20:13:36 -- integrity/fio-modes.sh@22 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:38.206 20:13:36 -- integrity/fio-modes.sh@25 -- # xtrace_disable 00:18:38.206 20:13:36 -- common/autotest_common.sh@10 -- # set +x 00:18:38.465 { 00:18:38.465 "subsystems": [ 00:18:38.465 { 00:18:38.465 "subsystem": "bdev", 00:18:38.465 "config": [ 00:18:38.465 { 00:18:38.465 "method": "bdev_nvme_attach_controller", 00:18:38.465 "params": { 00:18:38.465 "trtype": "PCIe", 00:18:38.465 "name": "Nvme0", 00:18:38.465 "traddr": "0000:5e:00.0" 00:18:38.465 } 00:18:38.465 }, 00:18:38.465 { 00:18:38.465 "method": "bdev_split_create", 00:18:38.465 "params": { 00:18:38.465 "base_bdev": "Nvme0n1", 00:18:38.465 "split_count": 8, 00:18:38.465 "split_size_mb": 101 00:18:38.465 } 00:18:38.465 }, 00:18:38.465 { 00:18:38.465 "method": "bdev_ocf_create", 00:18:38.465 "params": { 00:18:38.465 "name": "PT_Nvme", 00:18:38.465 "mode": "pt", 00:18:38.465 "cache_bdev_name": "Nvme0n1p0", 00:18:38.465 "core_bdev_name": "Nvme0n1p1" 00:18:38.465 } 00:18:38.465 }, 00:18:38.465 { 00:18:38.465 "method": "bdev_ocf_create", 00:18:38.465 "params": { 00:18:38.465 "name": "WT_Nvme", 00:18:38.465 "mode": "wt", 00:18:38.465 "cache_bdev_name": "Nvme0n1p2", 00:18:38.465 "core_bdev_name": "Nvme0n1p3" 00:18:38.465 } 00:18:38.465 }, 00:18:38.465 { 00:18:38.465 "method": "bdev_ocf_create", 00:18:38.465 "params": { 00:18:38.465 "name": "WB_Nvme0", 00:18:38.465 "mode": "wb", 00:18:38.465 "cache_bdev_name": "Nvme0n1p4", 00:18:38.465 "core_bdev_name": "Nvme0n1p5" 00:18:38.465 } 00:18:38.465 }, 00:18:38.465 { 00:18:38.465 "method": "bdev_ocf_create", 00:18:38.465 "params": { 00:18:38.465 "name": "WB_Nvme1", 00:18:38.465 "mode": "wb", 00:18:38.465 "cache_bdev_name": "Nvme0n1p6", 00:18:38.465 "core_bdev_name": "Nvme0n1p7" 00:18:38.465 } 00:18:38.465 }, 00:18:38.465 { 00:18:38.465 "method": "bdev_wait_for_examine" 00:18:38.465 } 00:18:38.465 ] 00:18:38.465 } 00:18:38.465 ] 00:18:38.465 } 00:18:38.465 20:13:36 -- integrity/fio-modes.sh@100 -- # fio_verify --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:18:38.465 20:13:36 -- integrity/fio-modes.sh@12 -- # fio_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:18:38.465 20:13:36 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev 
--filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:18:38.465 20:13:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:18:38.465 20:13:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:38.465 20:13:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:18:38.465 20:13:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev 00:18:38.465 20:13:36 -- common/autotest_common.sh@1320 -- # shift 00:18:38.465 20:13:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:18:38.465 20:13:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:38.465 20:13:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:38.465 20:13:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:18:38.465 20:13:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:18:38.465 20:13:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:18:38.465 20:13:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev' 00:18:38.465 20:13:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1 00:18:38.724 randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:38.724 randrw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:38.724 write: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:38.724 rw: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:38.724 randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:38.724 fio-3.35 00:18:38.724 Starting 5 threads 00:18:38.724 EAL: No free 2048 kB hugepages reported on node 1 00:18:53.671 00:18:53.671 randwrite: (groupid=0, jobs=5): err= 0: pid=2161302: Thu Apr 25 20:13:50 2024 00:18:53.671 read: IOPS=23.2k, BW=90.5MiB/s (94.9MB/s)(905MiB/10003msec) 00:18:53.671 slat (usec): min=5, max=417, avg=33.55, stdev=26.21 00:18:53.671 clat (usec): min=50, max=28010, avg=6951.74, stdev=3953.87 00:18:53.672 lat (usec): min=103, max=28031, avg=6985.29, stdev=3955.36 00:18:53.672 clat percentiles (usec): 00:18:53.672 | 1.00th=[ 375], 5.00th=[ 783], 10.00th=[ 1483], 20.00th=[ 3425], 00:18:53.672 | 30.00th=[ 4686], 40.00th=[ 5800], 50.00th=[ 6849], 
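The fio-modes job builds the JSON shown earlier (one NVMe bdev split eight ways, with OCF devices in pt, wt and wb modes layered on the splits) and then drives those OCF bdevs with fio through the external spdk_bdev ioengine, as the fio_plugin expansion above spells out. A condensed sketch of that invocation; test.fio itself is not reproduced in the log and is assumed to exist as shipped with the test:

  SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk

  LD_PRELOAD="$SPDK/build/fio/spdk_bdev" \
  /usr/src/fio/fio "$SPDK/test/ocf/integrity/test.fio" \
      --ioengine=spdk_bdev \
      --spdk_json_conf="$SPDK/test/ocf/integrity/modes.conf" \
      --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 \
      --thread=1 --aux-path=/tmp/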
60.00th=[ 7767], 00:18:53.672 | 70.00th=[ 8848], 80.00th=[ 9896], 90.00th=[12387], 95.00th=[14222], 00:18:53.672 | 99.00th=[16581], 99.50th=[17957], 99.90th=[22938], 99.95th=[24249], 00:18:53.672 | 99.99th=[25560] 00:18:53.672 bw ( KiB/s): min= 4880, max=35920, per=22.63%, avg=20960.00, stdev=3389.88, samples=77 00:18:53.672 iops : min= 1220, max= 8980, avg=5240.00, stdev=847.47, samples=77 00:18:53.672 write: IOPS=19.2k, BW=75.0MiB/s (78.6MB/s)(748MiB/9983msec); 0 zone resets 00:18:53.672 slat (usec): min=8, max=353, avg=30.74, stdev=20.16 00:18:53.672 clat (usec): min=48, max=89031, avg=8236.67, stdev=8051.64 00:18:53.672 lat (usec): min=74, max=89065, avg=8267.41, stdev=8057.15 00:18:53.672 clat percentiles (usec): 00:18:53.672 | 1.00th=[ 85], 5.00th=[ 110], 10.00th=[ 155], 20.00th=[ 375], 00:18:53.672 | 30.00th=[ 1647], 40.00th=[ 5080], 50.00th=[ 7242], 60.00th=[ 9241], 00:18:53.672 | 70.00th=[11076], 80.00th=[13304], 90.00th=[18220], 95.00th=[23200], 00:18:53.672 | 99.00th=[35914], 99.50th=[41681], 99.90th=[53740], 99.95th=[57934], 00:18:53.672 | 99.99th=[68682] 00:18:53.672 bw ( KiB/s): min=43784, max=108336, per=100.00%, avg=77351.26, stdev=4459.64, samples=93 00:18:53.672 iops : min=10946, max=27084, avg=19337.81, stdev=1114.91, samples=93 00:18:53.672 lat (usec) : 50=0.01%, 100=1.59%, 250=6.79%, 500=2.48%, 750=2.56% 00:18:53.672 lat (usec) : 1000=2.25% 00:18:53.672 lat (msec) : 2=5.20%, 4=9.00%, 10=43.20%, 20=23.17%, 50=3.69% 00:18:53.672 lat (msec) : 100=0.07% 00:18:53.672 cpu : usr=99.53%, sys=0.01%, ctx=247, majf=0, minf=523 00:18:53.672 IO depths : 1=6.4%, 2=5.0%, 4=5.5%, 8=7.8%, 16=10.0%, 32=17.8%, >=64=47.4% 00:18:53.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.672 complete : 0=0.0%, 4=97.5%, 8=0.7%, 16=0.5%, 32=0.7%, 64=0.5%, >=64=0.3% 00:18:53.672 issued rwts: total=231647,191570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.672 00:18:53.672 Run status group 0 (all jobs): 00:18:53.672 READ: bw=90.5MiB/s (94.9MB/s), 90.5MiB/s-90.5MiB/s (94.9MB/s-94.9MB/s), io=905MiB (949MB), run=10003-10003msec 00:18:53.672 WRITE: bw=75.0MiB/s (78.6MB/s), 75.0MiB/s-75.0MiB/s (78.6MB/s-78.6MB/s), io=748MiB (785MB), run=9983-9983msec 00:18:58.939 20:13:56 -- integrity/fio-modes.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:18:58.939 20:13:56 -- integrity/fio-modes.sh@103 -- # cleanup 00:18:58.939 20:13:56 -- integrity/fio-modes.sh@16 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf 00:18:58.939 00:18:58.939 real 0m32.335s 00:18:58.939 user 1m8.056s 00:18:58.939 sys 0m5.939s 00:18:58.939 20:13:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.939 20:13:56 -- common/autotest_common.sh@10 -- # set +x 00:18:58.939 ************************************ 00:18:58.939 END TEST ocf_fio_modes 00:18:58.939 ************************************ 00:18:59.197 20:13:56 -- ocf/ocf.sh@12 -- # run_test ocf_bdevperf_iotypes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh 00:18:59.197 20:13:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:18:59.197 20:13:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.197 20:13:56 -- common/autotest_common.sh@10 -- # set +x 00:18:59.197 ************************************ 00:18:59.197 START TEST ocf_bdevperf_iotypes 00:18:59.197 ************************************ 00:18:59.197 20:13:56 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh 00:18:59.197 20:13:56 -- integrity/bdevperf-iotypes.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf 00:18:59.197 20:13:56 -- integrity/bdevperf-iotypes.sh@12 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf 00:18:59.197 20:13:56 -- integrity/bdevperf-iotypes.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w flush 00:18:59.197 20:13:56 -- integrity/bdevperf-iotypes.sh@13 -- # gen_malloc_ocf_json 00:18:59.197 20:13:56 -- integrity/mallocs.conf@2 -- # local size=300 00:18:59.197 20:13:56 -- integrity/mallocs.conf@3 -- # local block_size=512 00:18:59.197 20:13:56 -- integrity/mallocs.conf@4 -- # local config 00:18:59.197 20:13:56 -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:18:59.197 { 00:18:59.197 "method": "bdev_malloc_create", 00:18:59.197 "params": { 00:18:59.197 "name": "Malloc$malloc", 00:18:59.197 "num_blocks": $(( (size << 20) / block_size )), 00:18:59.197 "block_size": 512 00:18:59.197 } 00:18:59.197 } 00:18:59.197 JSON 00:18:59.197 )") 00:18:59.197 20:13:56 -- integrity/mallocs.conf@21 -- # cat 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:18:59.197 { 00:18:59.197 "method": "bdev_malloc_create", 00:18:59.197 "params": { 00:18:59.197 "name": "Malloc$malloc", 00:18:59.197 "num_blocks": $(( (size << 20) / block_size )), 00:18:59.197 "block_size": 512 00:18:59.197 } 00:18:59.197 } 00:18:59.197 JSON 00:18:59.197 )") 00:18:59.197 20:13:56 -- integrity/mallocs.conf@21 -- # cat 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:18:59.197 { 00:18:59.197 "method": "bdev_malloc_create", 00:18:59.197 "params": { 00:18:59.197 "name": "Malloc$malloc", 00:18:59.197 "num_blocks": $(( (size << 20) / block_size )), 00:18:59.197 "block_size": 512 00:18:59.197 } 00:18:59.197 } 00:18:59.197 JSON 00:18:59.197 )") 00:18:59.197 20:13:56 -- integrity/mallocs.conf@21 -- # cat 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:18:59.197 20:13:56 -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:18:59.197 20:13:56 -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:18:59.197 20:13:56 -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 00:18:59.197 20:13:56 -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:18:59.197 20:13:56 -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:18:59.197 20:13:56 -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:18:59.197 20:13:56 -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:18:59.197 20:13:56 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:18:59.197 20:13:56 -- integrity/mallocs.conf@44 -- # 
config+=("$(cat <<-JSON 00:18:59.197 { 00:18:59.197 "method": "bdev_ocf_create", 00:18:59.197 "params": { 00:18:59.197 "name": "MalCache$ocf", 00:18:59.197 "mode": "${ocf_mode[ocf]}", 00:18:59.197 "cache_bdev_name": "${ocf_cache[ocf]}", 00:18:59.197 "core_bdev_name": "${ocf_core[ocf]}" 00:18:59.197 } 00:18:59.197 } 00:18:59.197 JSON 00:18:59.197 )") 00:18:59.197 20:13:56 -- integrity/mallocs.conf@44 -- # cat 00:18:59.197 20:13:56 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:18:59.197 20:13:56 -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:18:59.197 { 00:18:59.197 "method": "bdev_ocf_create", 00:18:59.197 "params": { 00:18:59.197 "name": "MalCache$ocf", 00:18:59.197 "mode": "${ocf_mode[ocf]}", 00:18:59.197 "cache_bdev_name": "${ocf_cache[ocf]}", 00:18:59.197 "core_bdev_name": "${ocf_core[ocf]}" 00:18:59.197 } 00:18:59.197 } 00:18:59.197 JSON 00:18:59.197 )") 00:18:59.197 20:13:56 -- integrity/mallocs.conf@44 -- # cat 00:18:59.197 20:13:56 -- integrity/mallocs.conf@47 -- # jq . 00:18:59.197 20:13:56 -- integrity/mallocs.conf@47 -- # IFS=, 00:18:59.197 20:13:56 -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:18:59.197 "method": "bdev_malloc_create", 00:18:59.197 "params": { 00:18:59.197 "name": "Malloc0", 00:18:59.197 "num_blocks": 614400, 00:18:59.197 "block_size": 512 00:18:59.197 } 00:18:59.197 },{ 00:18:59.197 "method": "bdev_malloc_create", 00:18:59.197 "params": { 00:18:59.197 "name": "Malloc1", 00:18:59.197 "num_blocks": 614400, 00:18:59.197 "block_size": 512 00:18:59.197 } 00:18:59.197 },{ 00:18:59.197 "method": "bdev_malloc_create", 00:18:59.197 "params": { 00:18:59.197 "name": "Malloc2", 00:18:59.197 "num_blocks": 614400, 00:18:59.197 "block_size": 512 00:18:59.197 } 00:18:59.197 },{ 00:18:59.197 "method": "bdev_ocf_create", 00:18:59.197 "params": { 00:18:59.197 "name": "MalCache1", 00:18:59.197 "mode": "wt", 00:18:59.197 "cache_bdev_name": "Malloc0", 00:18:59.197 "core_bdev_name": "Malloc1" 00:18:59.197 } 00:18:59.197 },{ 00:18:59.197 "method": "bdev_ocf_create", 00:18:59.197 "params": { 00:18:59.197 "name": "MalCache2", 00:18:59.197 "mode": "pt", 00:18:59.197 "cache_bdev_name": "Malloc0", 00:18:59.197 "core_bdev_name": "Malloc2" 00:18:59.197 } 00:18:59.197 }' 00:18:59.197 [2024-04-25 20:13:57.006830] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:18:59.197 [2024-04-25 20:13:57.006907] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163474 ] 00:18:59.197 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.197 [2024-04-25 20:13:57.104104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.455 [2024-04-25 20:13:57.201438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.455 [2024-04-25 20:13:57.383275] 'OCF_Core' volume operations registered 00:18:59.455 [2024-04-25 20:13:57.386522] 'OCF_Cache' volume operations registered 00:18:59.455 [2024-04-25 20:13:57.390195] 'OCF Composite' volume operations registered 00:18:59.713 [2024-04-25 20:13:57.393465] 'SPDK_block_device' volume operations registered 00:18:59.713 [2024-04-25 20:13:57.634360] Inserting cache MalCache1 00:18:59.713 [2024-04-25 20:13:57.634859] MalCache1: Metadata initialized 00:18:59.713 [2024-04-25 20:13:57.635304] MalCache1: Successfully added 00:18:59.713 [2024-04-25 20:13:57.635318] MalCache1: Cache mode : wt 00:18:59.713 [2024-04-25 20:13:57.646289] MalCache1: Super block config offset : 0 kiB 00:18:59.713 [2024-04-25 20:13:57.646312] MalCache1: Super block config size : 2200 B 00:18:59.713 [2024-04-25 20:13:57.646319] MalCache1: Super block runtime offset : 128 kiB 00:18:59.713 [2024-04-25 20:13:57.646326] MalCache1: Super block runtime size : 4 B 00:18:59.713 [2024-04-25 20:13:57.646333] MalCache1: Reserved offset : 256 kiB 00:18:59.713 [2024-04-25 20:13:57.646339] MalCache1: Reserved size : 128 kiB 00:18:59.713 [2024-04-25 20:13:57.646346] MalCache1: Part config offset : 384 kiB 00:18:59.713 [2024-04-25 20:13:57.646352] MalCache1: Part config size : 48 kiB 00:18:59.713 [2024-04-25 20:13:57.646359] MalCache1: Part runtime offset : 640 kiB 00:18:59.713 [2024-04-25 20:13:57.646365] MalCache1: Part runtime size : 72 kiB 00:18:59.714 [2024-04-25 20:13:57.646372] MalCache1: Core config offset : 768 kiB 00:18:59.714 [2024-04-25 20:13:57.646378] MalCache1: Core config size : 512 kiB 00:18:59.714 [2024-04-25 20:13:57.646385] MalCache1: Core runtime offset : 1792 kiB 00:18:59.714 [2024-04-25 20:13:57.646397] MalCache1: Core runtime size : 1172 kiB 00:18:59.714 [2024-04-25 20:13:57.646404] MalCache1: Core UUID offset : 3072 kiB 00:18:59.714 [2024-04-25 20:13:57.646411] MalCache1: Core UUID size : 16384 kiB 00:18:59.714 [2024-04-25 20:13:57.646417] MalCache1: Cleaning offset : 35840 kiB 00:18:59.714 [2024-04-25 20:13:57.646424] MalCache1: Cleaning size : 788 kiB 00:18:59.714 [2024-04-25 20:13:57.646430] MalCache1: LRU list offset : 36736 kiB 00:18:59.714 [2024-04-25 20:13:57.646437] MalCache1: LRU list size : 592 kiB 00:18:59.714 [2024-04-25 20:13:57.646443] MalCache1: Collision offset : 37376 kiB 00:18:59.714 [2024-04-25 20:13:57.646449] MalCache1: Collision size : 788 kiB 00:18:59.714 [2024-04-25 20:13:57.646456] MalCache1: List info offset : 38272 kiB 00:18:59.714 [2024-04-25 20:13:57.646462] MalCache1: List info size : 592 kiB 00:18:59.714 [2024-04-25 20:13:57.646469] MalCache1: Hash offset : 38912 kiB 00:18:59.714 [2024-04-25 20:13:57.646475] MalCache1: Hash size : 68 kiB 00:18:59.714 [2024-04-25 20:13:57.646482] MalCache1: Cache line size: 4 kiB 00:18:59.714 [2024-04-25 20:13:57.646491] MalCache1: Metadata capacity: 20 MiB 00:18:59.972 [2024-04-25 20:13:57.657049] MalCache1: Policy 'always' initialized successfully 00:18:59.972 [2024-04-25 
20:13:57.869235] MalCache1: Done saving cache state! 00:18:59.972 [2024-04-25 20:13:57.900764] MalCache1: Cache attached 00:18:59.972 [2024-04-25 20:13:57.900861] MalCache1: Successfully attached 00:18:59.972 [2024-04-25 20:13:57.901146] MalCache1: Inserting core Malloc1 00:18:59.972 [2024-04-25 20:13:57.901170] MalCache1.Malloc1: Seqential cutoff init 00:19:00.231 [2024-04-25 20:13:57.933025] MalCache1.Malloc1: Successfully added 00:19:00.231 [2024-04-25 20:13:57.938948] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:19:00.231 [2024-04-25 20:13:57.939190] MalCache1: Inserting core Malloc2 00:19:00.231 [2024-04-25 20:13:57.939214] MalCache1.Malloc2: Seqential cutoff init 00:19:00.231 [2024-04-25 20:13:57.971726] MalCache1.Malloc2: Successfully added 00:19:00.231 Running I/O for 4 seconds... 00:19:04.418 00:19:04.418 Latency(us) 00:19:04.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.418 Job: MalCache1 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:19:04.418 MalCache1 : 4.00 29995.85 117.17 0.00 0.00 4262.77 712.35 4815.47 00:19:04.418 Job: MalCache2 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:19:04.418 MalCache2 : 4.01 29985.56 117.13 0.00 0.00 4262.33 687.42 4616.01 00:19:04.418 =================================================================================================================== 00:19:04.418 Total : 59981.41 234.30 0.00 0.00 4262.55 687.42 4815.47 00:19:04.418 [2024-04-25 20:14:02.009833] MalCache1: Flushing cache 00:19:04.418 [2024-04-25 20:14:02.009866] MalCache1: Flushing cache completed 00:19:04.418 [2024-04-25 20:14:02.011377] MalCache1: Stopping cache 00:19:04.418 [2024-04-25 20:14:02.199008] MalCache1: Done saving cache state! 
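The flush-workload pass that just completed is bdevperf reading the generated JSON over an anonymous file descriptor (/dev/fd/62); bdevperf-iotypes.sh repeats the identical cache configuration for each I/O type, changing only the -w argument (flush above, then unmap and write in the runs below). A condensed sketch of that invocation pattern, with the workspace path shortened into a variable:

  SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
  # gen_malloc_ocf_json (sourced from mallocs.conf) prints the malloc + OCF config shown above;
  # process substitution hands it to bdevperf as /dev/fd/NN
  for workload in flush unmap write; do
    $SPDK/build/examples/bdevperf --json <(gen_malloc_ocf_json) -q 128 -o 4096 -t 4 -w $workload
  done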
00:19:04.418 [2024-04-25 20:14:02.214780] Cache MalCache1 successfully stopped 00:19:04.986 20:14:02 -- integrity/bdevperf-iotypes.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w unmap 00:19:04.986 20:14:02 -- integrity/bdevperf-iotypes.sh@14 -- # gen_malloc_ocf_json 00:19:04.986 20:14:02 -- integrity/mallocs.conf@2 -- # local size=300 00:19:04.986 20:14:02 -- integrity/mallocs.conf@3 -- # local block_size=512 00:19:04.986 20:14:02 -- integrity/mallocs.conf@4 -- # local config 00:19:04.986 20:14:02 -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:19:04.986 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:19:04.986 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:04.986 20:14:02 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:04.986 { 00:19:04.986 "method": "bdev_malloc_create", 00:19:04.986 "params": { 00:19:04.986 "name": "Malloc$malloc", 00:19:04.986 "num_blocks": $(( (size << 20) / block_size )), 00:19:04.986 "block_size": 512 00:19:04.986 } 00:19:04.986 } 00:19:04.986 JSON 00:19:04.986 )") 00:19:04.986 20:14:02 -- integrity/mallocs.conf@21 -- # cat 00:19:04.986 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:04.986 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:04.986 20:14:02 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:04.986 { 00:19:04.986 "method": "bdev_malloc_create", 00:19:04.986 "params": { 00:19:04.986 "name": "Malloc$malloc", 00:19:04.986 "num_blocks": $(( (size << 20) / block_size )), 00:19:04.986 "block_size": 512 00:19:04.986 } 00:19:04.986 } 00:19:04.986 JSON 00:19:04.986 )") 00:19:04.986 20:14:02 -- integrity/mallocs.conf@21 -- # cat 00:19:04.986 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:04.986 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:04.986 20:14:02 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:04.986 { 00:19:04.986 "method": "bdev_malloc_create", 00:19:04.986 "params": { 00:19:04.986 "name": "Malloc$malloc", 00:19:04.986 "num_blocks": $(( (size << 20) / block_size )), 00:19:04.986 "block_size": 512 00:19:04.986 } 00:19:04.986 } 00:19:04.986 JSON 00:19:04.986 )") 00:19:04.986 20:14:02 -- integrity/mallocs.conf@21 -- # cat 00:19:04.986 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:04.987 20:14:02 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:04.987 20:14:02 -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:19:04.987 20:14:02 -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:19:04.987 20:14:02 -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:19:04.987 20:14:02 -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 00:19:04.987 20:14:02 -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:19:04.987 20:14:02 -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:19:04.987 20:14:02 -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:19:04.987 20:14:02 -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:19:04.987 20:14:02 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:19:04.987 20:14:02 -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:19:04.987 { 00:19:04.987 "method": "bdev_ocf_create", 00:19:04.987 "params": { 00:19:04.987 "name": "MalCache$ocf", 00:19:04.987 "mode": "${ocf_mode[ocf]}", 00:19:04.987 "cache_bdev_name": "${ocf_cache[ocf]}", 00:19:04.987 "core_bdev_name": "${ocf_core[ocf]}" 
00:19:04.987 } 00:19:04.987 } 00:19:04.987 JSON 00:19:04.987 )") 00:19:04.987 20:14:02 -- integrity/mallocs.conf@44 -- # cat 00:19:04.987 20:14:02 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:19:04.987 20:14:02 -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:19:04.987 { 00:19:04.987 "method": "bdev_ocf_create", 00:19:04.987 "params": { 00:19:04.987 "name": "MalCache$ocf", 00:19:04.987 "mode": "${ocf_mode[ocf]}", 00:19:04.987 "cache_bdev_name": "${ocf_cache[ocf]}", 00:19:04.987 "core_bdev_name": "${ocf_core[ocf]}" 00:19:04.987 } 00:19:04.987 } 00:19:04.987 JSON 00:19:04.987 )") 00:19:04.987 20:14:02 -- integrity/mallocs.conf@44 -- # cat 00:19:04.987 20:14:02 -- integrity/mallocs.conf@47 -- # jq . 00:19:04.987 20:14:02 -- integrity/mallocs.conf@47 -- # IFS=, 00:19:04.987 20:14:02 -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:19:04.987 "method": "bdev_malloc_create", 00:19:04.987 "params": { 00:19:04.987 "name": "Malloc0", 00:19:04.987 "num_blocks": 614400, 00:19:04.987 "block_size": 512 00:19:04.987 } 00:19:04.987 },{ 00:19:04.987 "method": "bdev_malloc_create", 00:19:04.987 "params": { 00:19:04.987 "name": "Malloc1", 00:19:04.987 "num_blocks": 614400, 00:19:04.987 "block_size": 512 00:19:04.987 } 00:19:04.987 },{ 00:19:04.987 "method": "bdev_malloc_create", 00:19:04.987 "params": { 00:19:04.987 "name": "Malloc2", 00:19:04.987 "num_blocks": 614400, 00:19:04.987 "block_size": 512 00:19:04.987 } 00:19:04.987 },{ 00:19:04.987 "method": "bdev_ocf_create", 00:19:04.987 "params": { 00:19:04.987 "name": "MalCache1", 00:19:04.987 "mode": "wt", 00:19:04.987 "cache_bdev_name": "Malloc0", 00:19:04.987 "core_bdev_name": "Malloc1" 00:19:04.987 } 00:19:04.987 },{ 00:19:04.987 "method": "bdev_ocf_create", 00:19:04.987 "params": { 00:19:04.987 "name": "MalCache2", 00:19:04.987 "mode": "pt", 00:19:04.987 "cache_bdev_name": "Malloc0", 00:19:04.987 "core_bdev_name": "Malloc2" 00:19:04.987 } 00:19:04.987 }' 00:19:04.987 [2024-04-25 20:14:02.866904] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:04.987 [2024-04-25 20:14:02.866979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2164208 ] 00:19:04.987 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.246 [2024-04-25 20:14:02.973076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.246 [2024-04-25 20:14:03.068337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.504 [2024-04-25 20:14:03.272533] 'OCF_Core' volume operations registered 00:19:05.504 [2024-04-25 20:14:03.276099] 'OCF_Cache' volume operations registered 00:19:05.504 [2024-04-25 20:14:03.280050] 'OCF Composite' volume operations registered 00:19:05.504 [2024-04-25 20:14:03.283572] 'SPDK_block_device' volume operations registered 00:19:05.763 [2024-04-25 20:14:03.522699] Inserting cache MalCache1 00:19:05.763 [2024-04-25 20:14:03.523188] MalCache1: Metadata initialized 00:19:05.763 [2024-04-25 20:14:03.523638] MalCache1: Successfully added 00:19:05.763 [2024-04-25 20:14:03.523653] MalCache1: Cache mode : wt 00:19:05.763 [2024-04-25 20:14:03.534479] MalCache1: Super block config offset : 0 kiB 00:19:05.763 [2024-04-25 20:14:03.534508] MalCache1: Super block config size : 2200 B 00:19:05.763 [2024-04-25 20:14:03.534515] MalCache1: Super block runtime offset : 128 kiB 00:19:05.763 [2024-04-25 20:14:03.534522] MalCache1: Super block runtime size : 4 B 00:19:05.763 [2024-04-25 20:14:03.534529] MalCache1: Reserved offset : 256 kiB 00:19:05.763 [2024-04-25 20:14:03.534536] MalCache1: Reserved size : 128 kiB 00:19:05.763 [2024-04-25 20:14:03.534543] MalCache1: Part config offset : 384 kiB 00:19:05.763 [2024-04-25 20:14:03.534550] MalCache1: Part config size : 48 kiB 00:19:05.763 [2024-04-25 20:14:03.534557] MalCache1: Part runtime offset : 640 kiB 00:19:05.763 [2024-04-25 20:14:03.534564] MalCache1: Part runtime size : 72 kiB 00:19:05.763 [2024-04-25 20:14:03.534570] MalCache1: Core config offset : 768 kiB 00:19:05.763 [2024-04-25 20:14:03.534577] MalCache1: Core config size : 512 kiB 00:19:05.763 [2024-04-25 20:14:03.534583] MalCache1: Core runtime offset : 1792 kiB 00:19:05.763 [2024-04-25 20:14:03.534590] MalCache1: Core runtime size : 1172 kiB 00:19:05.763 [2024-04-25 20:14:03.534597] MalCache1: Core UUID offset : 3072 kiB 00:19:05.763 [2024-04-25 20:14:03.534603] MalCache1: Core UUID size : 16384 kiB 00:19:05.763 [2024-04-25 20:14:03.534610] MalCache1: Cleaning offset : 35840 kiB 00:19:05.763 [2024-04-25 20:14:03.534616] MalCache1: Cleaning size : 788 kiB 00:19:05.763 [2024-04-25 20:14:03.534623] MalCache1: LRU list offset : 36736 kiB 00:19:05.763 [2024-04-25 20:14:03.534630] MalCache1: LRU list size : 592 kiB 00:19:05.763 [2024-04-25 20:14:03.534641] MalCache1: Collision offset : 37376 kiB 00:19:05.763 [2024-04-25 20:14:03.534648] MalCache1: Collision size : 788 kiB 00:19:05.763 [2024-04-25 20:14:03.534655] MalCache1: List info offset : 38272 kiB 00:19:05.763 [2024-04-25 20:14:03.534661] MalCache1: List info size : 592 kiB 00:19:05.763 [2024-04-25 20:14:03.534668] MalCache1: Hash offset : 38912 kiB 00:19:05.763 [2024-04-25 20:14:03.534675] MalCache1: Hash size : 68 kiB 00:19:05.763 [2024-04-25 20:14:03.534683] MalCache1: Cache line size: 4 kiB 00:19:05.763 [2024-04-25 20:14:03.534691] MalCache1: Metadata capacity: 20 MiB 00:19:05.763 [2024-04-25 20:14:03.545130] MalCache1: Policy 'always' initialized successfully 00:19:06.022 [2024-04-25 
20:14:03.757133] MalCache1: Done saving cache state! 00:19:06.022 [2024-04-25 20:14:03.788727] MalCache1: Cache attached 00:19:06.022 [2024-04-25 20:14:03.788822] MalCache1: Successfully attached 00:19:06.022 [2024-04-25 20:14:03.789096] MalCache1: Inserting core Malloc1 00:19:06.022 [2024-04-25 20:14:03.789121] MalCache1.Malloc1: Seqential cutoff init 00:19:06.022 [2024-04-25 20:14:03.820372] MalCache1.Malloc1: Successfully added 00:19:06.022 [2024-04-25 20:14:03.826370] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:19:06.022 [2024-04-25 20:14:03.826609] MalCache1: Inserting core Malloc2 00:19:06.022 [2024-04-25 20:14:03.826642] MalCache1.Malloc2: Seqential cutoff init 00:19:06.022 [2024-04-25 20:14:03.858415] MalCache1.Malloc2: Successfully added 00:19:06.022 Running I/O for 4 seconds... 00:19:10.213 00:19:10.213 Latency(us) 00:19:10.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.213 Job: MalCache1 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:19:10.213 MalCache1 : 4.00 23660.52 92.42 0.00 0.00 5413.17 1196.74 4026531.84 00:19:10.213 Job: MalCache2 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:19:10.213 MalCache2 : 4.01 23652.78 92.39 0.00 0.00 5412.99 1025.78 4026531.84 00:19:10.213 =================================================================================================================== 00:19:10.213 Total : 47313.31 184.82 0.00 0.00 5413.08 1025.78 4026531.84 00:19:10.213 [2024-04-25 20:14:07.896546] MalCache1: Flushing cache 00:19:10.213 [2024-04-25 20:14:07.896585] MalCache1: Flushing cache completed 00:19:10.213 [2024-04-25 20:14:07.898086] MalCache1: Stopping cache 00:19:10.213 [2024-04-25 20:14:08.084989] MalCache1: Done saving cache state! 
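The recurring "EAL: No free 2048 kB hugepages reported on node 1" notice at the start of each run is informational in this job: DPDK finds no free 2 MiB hugepages on NUMA node 1, yet every run here completes, so the pages available to the process are sufficient. If a run did fail on hugepage allocation, pages could be reserved per node through sysfs or via SPDK's setup script; the sizes below are illustrative, not values taken from this job:

  # reserve 1024 x 2 MiB pages on node 1 through the kernel sysfs interface
  echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
  # or let SPDK's setup script size and reserve hugepages itself (HUGEMEM is in MiB)
  sudo HUGEMEM=4096 /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh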
00:19:10.213 [2024-04-25 20:14:08.100708] Cache MalCache1 successfully stopped 00:19:11.149 20:14:08 -- integrity/bdevperf-iotypes.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w write 00:19:11.149 20:14:08 -- integrity/bdevperf-iotypes.sh@15 -- # gen_malloc_ocf_json 00:19:11.149 20:14:08 -- integrity/mallocs.conf@2 -- # local size=300 00:19:11.149 20:14:08 -- integrity/mallocs.conf@3 -- # local block_size=512 00:19:11.149 20:14:08 -- integrity/mallocs.conf@4 -- # local config 00:19:11.149 20:14:08 -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:11.149 { 00:19:11.149 "method": "bdev_malloc_create", 00:19:11.149 "params": { 00:19:11.149 "name": "Malloc$malloc", 00:19:11.149 "num_blocks": $(( (size << 20) / block_size )), 00:19:11.149 "block_size": 512 00:19:11.149 } 00:19:11.149 } 00:19:11.149 JSON 00:19:11.149 )") 00:19:11.149 20:14:08 -- integrity/mallocs.conf@21 -- # cat 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:11.149 { 00:19:11.149 "method": "bdev_malloc_create", 00:19:11.149 "params": { 00:19:11.149 "name": "Malloc$malloc", 00:19:11.149 "num_blocks": $(( (size << 20) / block_size )), 00:19:11.149 "block_size": 512 00:19:11.149 } 00:19:11.149 } 00:19:11.149 JSON 00:19:11.149 )") 00:19:11.149 20:14:08 -- integrity/mallocs.conf@21 -- # cat 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:11.149 { 00:19:11.149 "method": "bdev_malloc_create", 00:19:11.149 "params": { 00:19:11.149 "name": "Malloc$malloc", 00:19:11.149 "num_blocks": $(( (size << 20) / block_size )), 00:19:11.149 "block_size": 512 00:19:11.149 } 00:19:11.149 } 00:19:11.149 JSON 00:19:11.149 )") 00:19:11.149 20:14:08 -- integrity/mallocs.conf@21 -- # cat 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:19:11.149 20:14:08 -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:19:11.149 20:14:08 -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:19:11.149 20:14:08 -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 00:19:11.149 20:14:08 -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:19:11.149 20:14:08 -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:19:11.149 20:14:08 -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:19:11.149 20:14:08 -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:19:11.149 20:14:08 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:19:11.149 20:14:08 -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:19:11.149 { 00:19:11.149 "method": "bdev_ocf_create", 00:19:11.149 "params": { 00:19:11.149 "name": "MalCache$ocf", 00:19:11.149 "mode": "${ocf_mode[ocf]}", 00:19:11.149 "cache_bdev_name": "${ocf_cache[ocf]}", 00:19:11.149 "core_bdev_name": "${ocf_core[ocf]}" 
00:19:11.149 } 00:19:11.149 } 00:19:11.149 JSON 00:19:11.149 )") 00:19:11.149 20:14:08 -- integrity/mallocs.conf@44 -- # cat 00:19:11.149 20:14:08 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:19:11.149 20:14:08 -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:19:11.149 { 00:19:11.149 "method": "bdev_ocf_create", 00:19:11.149 "params": { 00:19:11.149 "name": "MalCache$ocf", 00:19:11.149 "mode": "${ocf_mode[ocf]}", 00:19:11.149 "cache_bdev_name": "${ocf_cache[ocf]}", 00:19:11.149 "core_bdev_name": "${ocf_core[ocf]}" 00:19:11.149 } 00:19:11.149 } 00:19:11.149 JSON 00:19:11.149 )") 00:19:11.149 20:14:08 -- integrity/mallocs.conf@44 -- # cat 00:19:11.149 20:14:08 -- integrity/mallocs.conf@47 -- # jq . 00:19:11.149 20:14:08 -- integrity/mallocs.conf@47 -- # IFS=, 00:19:11.149 20:14:08 -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:19:11.149 "method": "bdev_malloc_create", 00:19:11.149 "params": { 00:19:11.149 "name": "Malloc0", 00:19:11.149 "num_blocks": 614400, 00:19:11.149 "block_size": 512 00:19:11.149 } 00:19:11.149 },{ 00:19:11.149 "method": "bdev_malloc_create", 00:19:11.149 "params": { 00:19:11.149 "name": "Malloc1", 00:19:11.149 "num_blocks": 614400, 00:19:11.149 "block_size": 512 00:19:11.149 } 00:19:11.149 },{ 00:19:11.149 "method": "bdev_malloc_create", 00:19:11.149 "params": { 00:19:11.149 "name": "Malloc2", 00:19:11.149 "num_blocks": 614400, 00:19:11.149 "block_size": 512 00:19:11.149 } 00:19:11.149 },{ 00:19:11.149 "method": "bdev_ocf_create", 00:19:11.150 "params": { 00:19:11.150 "name": "MalCache1", 00:19:11.150 "mode": "wt", 00:19:11.150 "cache_bdev_name": "Malloc0", 00:19:11.150 "core_bdev_name": "Malloc1" 00:19:11.150 } 00:19:11.150 },{ 00:19:11.150 "method": "bdev_ocf_create", 00:19:11.150 "params": { 00:19:11.150 "name": "MalCache2", 00:19:11.150 "mode": "pt", 00:19:11.150 "cache_bdev_name": "Malloc0", 00:19:11.150 "core_bdev_name": "Malloc2" 00:19:11.150 } 00:19:11.150 }' 00:19:11.150 [2024-04-25 20:14:08.781889] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
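The vbdev_ocf notice "OCF bdev MalCache2 connects to existing cache device Malloc0", repeated in each run above and below, records that the second bdev_ocf_create does not format a second cache: because both OCF bdevs name Malloc0 as their cache, MalCache2 simply attaches its core (Malloc2) to the cache instance MalCache1 already started. Against a target that exposes an RPC socket (the ocf_stats run later in this log starts bdevperf with -r /var/tmp/spdk.sock for that purpose), the pairing can be inspected at runtime; a sketch, assuming the bdev_ocf_get_bdevs RPC is present in this SPDK tree:

  RPC="/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_ocf_get_bdevs           # every OCF bdev with its cache/core pairing and state
  $RPC bdev_ocf_get_bdevs Malloc0   # only the OCF bdevs built on the Malloc0 cache device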
00:19:11.150 [2024-04-25 20:14:08.781968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165097 ] 00:19:11.150 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.150 [2024-04-25 20:14:08.888525] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.150 [2024-04-25 20:14:08.990123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.410 [2024-04-25 20:14:09.191439] 'OCF_Core' volume operations registered 00:19:11.410 [2024-04-25 20:14:09.194927] 'OCF_Cache' volume operations registered 00:19:11.410 [2024-04-25 20:14:09.198866] 'OCF Composite' volume operations registered 00:19:11.410 [2024-04-25 20:14:09.202355] 'SPDK_block_device' volume operations registered 00:19:11.669 [2024-04-25 20:14:09.447831] Inserting cache MalCache1 00:19:11.669 [2024-04-25 20:14:09.448304] MalCache1: Metadata initialized 00:19:11.669 [2024-04-25 20:14:09.448754] MalCache1: Successfully added 00:19:11.669 [2024-04-25 20:14:09.448770] MalCache1: Cache mode : wt 00:19:11.669 [2024-04-25 20:14:09.459336] MalCache1: Super block config offset : 0 kiB 00:19:11.669 [2024-04-25 20:14:09.459360] MalCache1: Super block config size : 2200 B 00:19:11.669 [2024-04-25 20:14:09.459367] MalCache1: Super block runtime offset : 128 kiB 00:19:11.669 [2024-04-25 20:14:09.459374] MalCache1: Super block runtime size : 4 B 00:19:11.669 [2024-04-25 20:14:09.459380] MalCache1: Reserved offset : 256 kiB 00:19:11.669 [2024-04-25 20:14:09.459388] MalCache1: Reserved size : 128 kiB 00:19:11.669 [2024-04-25 20:14:09.459394] MalCache1: Part config offset : 384 kiB 00:19:11.669 [2024-04-25 20:14:09.459401] MalCache1: Part config size : 48 kiB 00:19:11.669 [2024-04-25 20:14:09.459407] MalCache1: Part runtime offset : 640 kiB 00:19:11.669 [2024-04-25 20:14:09.459414] MalCache1: Part runtime size : 72 kiB 00:19:11.669 [2024-04-25 20:14:09.459420] MalCache1: Core config offset : 768 kiB 00:19:11.669 [2024-04-25 20:14:09.459427] MalCache1: Core config size : 512 kiB 00:19:11.669 [2024-04-25 20:14:09.459433] MalCache1: Core runtime offset : 1792 kiB 00:19:11.669 [2024-04-25 20:14:09.459439] MalCache1: Core runtime size : 1172 kiB 00:19:11.669 [2024-04-25 20:14:09.459446] MalCache1: Core UUID offset : 3072 kiB 00:19:11.669 [2024-04-25 20:14:09.459452] MalCache1: Core UUID size : 16384 kiB 00:19:11.669 [2024-04-25 20:14:09.459459] MalCache1: Cleaning offset : 35840 kiB 00:19:11.669 [2024-04-25 20:14:09.459465] MalCache1: Cleaning size : 788 kiB 00:19:11.669 [2024-04-25 20:14:09.459472] MalCache1: LRU list offset : 36736 kiB 00:19:11.669 [2024-04-25 20:14:09.459478] MalCache1: LRU list size : 592 kiB 00:19:11.669 [2024-04-25 20:14:09.459485] MalCache1: Collision offset : 37376 kiB 00:19:11.669 [2024-04-25 20:14:09.459491] MalCache1: Collision size : 788 kiB 00:19:11.669 [2024-04-25 20:14:09.459498] MalCache1: List info offset : 38272 kiB 00:19:11.669 [2024-04-25 20:14:09.459504] MalCache1: List info size : 592 kiB 00:19:11.669 [2024-04-25 20:14:09.459511] MalCache1: Hash offset : 38912 kiB 00:19:11.669 [2024-04-25 20:14:09.459517] MalCache1: Hash size : 68 kiB 00:19:11.669 [2024-04-25 20:14:09.459525] MalCache1: Cache line size: 4 kiB 00:19:11.669 [2024-04-25 20:14:09.459532] MalCache1: Metadata capacity: 20 MiB 00:19:11.669 [2024-04-25 20:14:09.469619] MalCache1: Policy 'always' initialized successfully 00:19:11.929 [2024-04-25 
20:14:09.681134] MalCache1: Done saving cache state! 00:19:11.929 [2024-04-25 20:14:09.712093] MalCache1: Cache attached 00:19:11.929 [2024-04-25 20:14:09.712189] MalCache1: Successfully attached 00:19:11.929 [2024-04-25 20:14:09.712466] MalCache1: Inserting core Malloc1 00:19:11.929 [2024-04-25 20:14:09.712491] MalCache1.Malloc1: Seqential cutoff init 00:19:11.929 [2024-04-25 20:14:09.743236] MalCache1.Malloc1: Successfully added 00:19:11.929 [2024-04-25 20:14:09.749230] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:19:11.929 [2024-04-25 20:14:09.749476] MalCache1: Inserting core Malloc2 00:19:11.929 [2024-04-25 20:14:09.749500] MalCache1.Malloc2: Seqential cutoff init 00:19:11.929 [2024-04-25 20:14:09.780735] MalCache1.Malloc2: Successfully added 00:19:11.929 Running I/O for 4 seconds... 00:19:16.119 00:19:16.119 Latency(us) 00:19:16.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.119 Job: MalCache1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:16.119 MalCache1 : 4.00 16475.77 64.36 0.00 0.00 7759.30 1424.70 10314.80 00:19:16.119 Job: MalCache2 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:16.119 MalCache2 : 4.01 16476.92 64.36 0.00 0.00 7754.88 1367.71 9972.87 00:19:16.119 =================================================================================================================== 00:19:16.119 Total : 32952.69 128.72 0.00 0.00 7757.09 1367.71 10314.80 00:19:16.119 [2024-04-25 20:14:13.819563] MalCache1: Flushing cache 00:19:16.119 [2024-04-25 20:14:13.819600] MalCache1: Flushing cache completed 00:19:16.119 [2024-04-25 20:14:13.820453] MalCache1: Stopping cache 00:19:16.119 [2024-04-25 20:14:14.008275] MalCache1: Done saving cache state! 
00:19:16.119 [2024-04-25 20:14:14.024470] Cache MalCache1 successfully stopped 00:19:17.058 00:19:17.058 real 0m17.821s 00:19:17.058 user 0m16.221s 00:19:17.058 sys 0m1.681s 00:19:17.058 20:14:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.058 20:14:14 -- common/autotest_common.sh@10 -- # set +x 00:19:17.058 ************************************ 00:19:17.058 END TEST ocf_bdevperf_iotypes 00:19:17.058 ************************************ 00:19:17.058 20:14:14 -- ocf/ocf.sh@13 -- # run_test ocf_stats /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh 00:19:17.058 20:14:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:17.058 20:14:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:17.058 20:14:14 -- common/autotest_common.sh@10 -- # set +x 00:19:17.058 ************************************ 00:19:17.058 START TEST ocf_stats 00:19:17.058 ************************************ 00:19:17.058 20:14:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh 00:19:17.058 20:14:14 -- integrity/stats.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf 00:19:17.058 20:14:14 -- integrity/stats.sh@12 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf 00:19:17.058 20:14:14 -- integrity/stats.sh@14 -- # bdev_perf_pid=2165858 00:19:17.058 20:14:14 -- integrity/stats.sh@15 -- # waitforlisten 2165858 00:19:17.058 20:14:14 -- common/autotest_common.sh@819 -- # '[' -z 2165858 ']' 00:19:17.058 20:14:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.058 20:14:14 -- integrity/stats.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock 00:19:17.058 20:14:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:17.058 20:14:14 -- integrity/stats.sh@13 -- # gen_malloc_ocf_json 00:19:17.058 20:14:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:17.058 20:14:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:17.058 20:14:14 -- integrity/mallocs.conf@2 -- # local size=300 00:19:17.058 20:14:14 -- common/autotest_common.sh@10 -- # set +x 00:19:17.058 20:14:14 -- integrity/mallocs.conf@3 -- # local block_size=512 00:19:17.058 20:14:14 -- integrity/mallocs.conf@4 -- # local config 00:19:17.058 20:14:14 -- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc = 0 )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:17.058 { 00:19:17.058 "method": "bdev_malloc_create", 00:19:17.058 "params": { 00:19:17.058 "name": "Malloc$malloc", 00:19:17.058 "num_blocks": $(( (size << 20) / block_size )), 00:19:17.058 "block_size": 512 00:19:17.058 } 00:19:17.058 } 00:19:17.058 JSON 00:19:17.058 )") 00:19:17.058 20:14:14 -- integrity/mallocs.conf@21 -- # cat 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:17.058 { 00:19:17.058 "method": "bdev_malloc_create", 00:19:17.058 "params": { 00:19:17.058 "name": "Malloc$malloc", 00:19:17.058 "num_blocks": $(( (size << 20) / block_size )), 00:19:17.058 "block_size": 512 00:19:17.058 } 00:19:17.058 } 00:19:17.058 JSON 00:19:17.058 )") 00:19:17.058 20:14:14 -- integrity/mallocs.conf@21 -- # cat 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON 00:19:17.058 { 00:19:17.058 "method": "bdev_malloc_create", 00:19:17.058 "params": { 00:19:17.058 "name": "Malloc$malloc", 00:19:17.058 "num_blocks": $(( (size << 20) / block_size )), 00:19:17.058 "block_size": 512 00:19:17.058 } 00:19:17.058 } 00:19:17.058 JSON 00:19:17.058 )") 00:19:17.058 20:14:14 -- integrity/mallocs.conf@21 -- # cat 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc++ )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs )) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core 00:19:17.058 20:14:14 -- integrity/mallocs.conf@25 -- # ocfs=(1 2) 00:19:17.058 20:14:14 -- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt 00:19:17.058 20:14:14 -- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0 00:19:17.058 20:14:14 -- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1 00:19:17.058 20:14:14 -- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt 00:19:17.058 20:14:14 -- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0 00:19:17.058 20:14:14 -- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2 00:19:17.058 20:14:14 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 00:19:17.058 20:14:14 -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:19:17.058 { 00:19:17.058 "method": "bdev_ocf_create", 00:19:17.058 "params": { 00:19:17.058 "name": "MalCache$ocf", 00:19:17.058 "mode": "${ocf_mode[ocf]}", 00:19:17.058 "cache_bdev_name": "${ocf_cache[ocf]}", 00:19:17.058 "core_bdev_name": "${ocf_core[ocf]}" 00:19:17.058 } 00:19:17.058 } 00:19:17.058 JSON 00:19:17.058 )") 00:19:17.058 20:14:14 -- integrity/mallocs.conf@44 -- # cat 00:19:17.058 20:14:14 -- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}" 
00:19:17.058 20:14:14 -- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON 00:19:17.058 { 00:19:17.058 "method": "bdev_ocf_create", 00:19:17.058 "params": { 00:19:17.058 "name": "MalCache$ocf", 00:19:17.058 "mode": "${ocf_mode[ocf]}", 00:19:17.058 "cache_bdev_name": "${ocf_cache[ocf]}", 00:19:17.058 "core_bdev_name": "${ocf_core[ocf]}" 00:19:17.058 } 00:19:17.058 } 00:19:17.058 JSON 00:19:17.058 )") 00:19:17.058 20:14:14 -- integrity/mallocs.conf@44 -- # cat 00:19:17.058 20:14:14 -- integrity/mallocs.conf@47 -- # jq . 00:19:17.058 20:14:14 -- integrity/mallocs.conf@47 -- # IFS=, 00:19:17.058 20:14:14 -- integrity/mallocs.conf@47 -- # printf '%s\n' '{ 00:19:17.058 "method": "bdev_malloc_create", 00:19:17.058 "params": { 00:19:17.058 "name": "Malloc0", 00:19:17.058 "num_blocks": 614400, 00:19:17.058 "block_size": 512 00:19:17.058 } 00:19:17.058 },{ 00:19:17.058 "method": "bdev_malloc_create", 00:19:17.058 "params": { 00:19:17.058 "name": "Malloc1", 00:19:17.058 "num_blocks": 614400, 00:19:17.058 "block_size": 512 00:19:17.058 } 00:19:17.058 },{ 00:19:17.058 "method": "bdev_malloc_create", 00:19:17.058 "params": { 00:19:17.058 "name": "Malloc2", 00:19:17.058 "num_blocks": 614400, 00:19:17.058 "block_size": 512 00:19:17.058 } 00:19:17.058 },{ 00:19:17.058 "method": "bdev_ocf_create", 00:19:17.058 "params": { 00:19:17.058 "name": "MalCache1", 00:19:17.058 "mode": "wt", 00:19:17.058 "cache_bdev_name": "Malloc0", 00:19:17.058 "core_bdev_name": "Malloc1" 00:19:17.058 } 00:19:17.058 },{ 00:19:17.058 "method": "bdev_ocf_create", 00:19:17.058 "params": { 00:19:17.058 "name": "MalCache2", 00:19:17.058 "mode": "pt", 00:19:17.058 "cache_bdev_name": "Malloc0", 00:19:17.058 "core_bdev_name": "Malloc2" 00:19:17.058 } 00:19:17.058 }' 00:19:17.058 [2024-04-25 20:14:14.883949] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
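Unlike the iotypes runs, the ocf_stats bdevperf instance is launched with -r /var/tmp/spdk.sock and a 120-second run time, so stats.sh can interrogate the cache while I/O is still in flight; the bdev_ocf_get_stats output that appears further down is fetched over that socket. A sketch of pulling one counter out of the same call, using jq as the surrounding scripts already do:

  RPC="/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_ocf_get_stats MalCache1                                     # full usage/requests/blocks/errors breakdown
  $RPC bdev_ocf_get_stats MalCache1 | jq -r '.requests.serviced.count'  # e.g. total serviced requests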
00:19:17.058 [2024-04-25 20:14:14.884031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165858 ] 00:19:17.058 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.357 [2024-04-25 20:14:14.992118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.357 [2024-04-25 20:14:15.088813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.357 [2024-04-25 20:14:15.270503] 'OCF_Core' volume operations registered 00:19:17.357 [2024-04-25 20:14:15.273762] 'OCF_Cache' volume operations registered 00:19:17.617 [2024-04-25 20:14:15.277377] 'OCF Composite' volume operations registered 00:19:17.617 [2024-04-25 20:14:15.280584] 'SPDK_block_device' volume operations registered 00:19:17.617 [2024-04-25 20:14:15.496377] Inserting cache MalCache1 00:19:17.617 [2024-04-25 20:14:15.496813] MalCache1: Metadata initialized 00:19:17.617 [2024-04-25 20:14:15.497257] MalCache1: Successfully added 00:19:17.617 [2024-04-25 20:14:15.497271] MalCache1: Cache mode : wt 00:19:17.617 [2024-04-25 20:14:15.507255] MalCache1: Super block config offset : 0 kiB 00:19:17.617 [2024-04-25 20:14:15.507278] MalCache1: Super block config size : 2200 B 00:19:17.617 [2024-04-25 20:14:15.507286] MalCache1: Super block runtime offset : 128 kiB 00:19:17.617 [2024-04-25 20:14:15.507293] MalCache1: Super block runtime size : 4 B 00:19:17.617 [2024-04-25 20:14:15.507300] MalCache1: Reserved offset : 256 kiB 00:19:17.617 [2024-04-25 20:14:15.507306] MalCache1: Reserved size : 128 kiB 00:19:17.617 [2024-04-25 20:14:15.507313] MalCache1: Part config offset : 384 kiB 00:19:17.617 [2024-04-25 20:14:15.507319] MalCache1: Part config size : 48 kiB 00:19:17.617 [2024-04-25 20:14:15.507326] MalCache1: Part runtime offset : 640 kiB 00:19:17.617 [2024-04-25 20:14:15.507333] MalCache1: Part runtime size : 72 kiB 00:19:17.617 [2024-04-25 20:14:15.507339] MalCache1: Core config offset : 768 kiB 00:19:17.617 [2024-04-25 20:14:15.507351] MalCache1: Core config size : 512 kiB 00:19:17.617 [2024-04-25 20:14:15.507358] MalCache1: Core runtime offset : 1792 kiB 00:19:17.617 [2024-04-25 20:14:15.507364] MalCache1: Core runtime size : 1172 kiB 00:19:17.617 [2024-04-25 20:14:15.507371] MalCache1: Core UUID offset : 3072 kiB 00:19:17.617 [2024-04-25 20:14:15.507377] MalCache1: Core UUID size : 16384 kiB 00:19:17.617 [2024-04-25 20:14:15.507384] MalCache1: Cleaning offset : 35840 kiB 00:19:17.617 [2024-04-25 20:14:15.507390] MalCache1: Cleaning size : 788 kiB 00:19:17.617 [2024-04-25 20:14:15.507397] MalCache1: LRU list offset : 36736 kiB 00:19:17.617 [2024-04-25 20:14:15.507403] MalCache1: LRU list size : 592 kiB 00:19:17.617 [2024-04-25 20:14:15.507410] MalCache1: Collision offset : 37376 kiB 00:19:17.617 [2024-04-25 20:14:15.507416] MalCache1: Collision size : 788 kiB 00:19:17.617 [2024-04-25 20:14:15.507422] MalCache1: List info offset : 38272 kiB 00:19:17.617 [2024-04-25 20:14:15.507429] MalCache1: List info size : 592 kiB 00:19:17.617 [2024-04-25 20:14:15.507436] MalCache1: Hash offset : 38912 kiB 00:19:17.617 [2024-04-25 20:14:15.507442] MalCache1: Hash size : 68 kiB 00:19:17.617 [2024-04-25 20:14:15.507450] MalCache1: Cache line size: 4 kiB 00:19:17.617 [2024-04-25 20:14:15.507458] MalCache1: Metadata capacity: 20 MiB 00:19:17.617 [2024-04-25 20:14:15.517062] MalCache1: Policy 'always' initialized successfully 00:19:17.876 [2024-04-25 
20:14:15.728358] MalCache1: Done saving cache state! 00:19:17.876 [2024-04-25 20:14:15.759340] MalCache1: Cache attached 00:19:17.876 [2024-04-25 20:14:15.759436] MalCache1: Successfully attached 00:19:17.876 [2024-04-25 20:14:15.759727] MalCache1: Inserting core Malloc1 00:19:17.876 [2024-04-25 20:14:15.759755] MalCache1.Malloc1: Seqential cutoff init 00:19:17.876 [2024-04-25 20:14:15.790619] MalCache1.Malloc1: Successfully added 00:19:17.876 [2024-04-25 20:14:15.796430] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0 00:19:17.876 [2024-04-25 20:14:15.796693] MalCache1: Inserting core Malloc2 00:19:17.876 [2024-04-25 20:14:15.796719] MalCache1.Malloc2: Seqential cutoff init 00:19:18.134 [2024-04-25 20:14:15.828055] MalCache1.Malloc2: Successfully added 00:19:18.134 Running I/O for 120 seconds... 00:19:18.702 20:14:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:18.702 20:14:16 -- common/autotest_common.sh@852 -- # return 0 00:19:18.702 20:14:16 -- integrity/stats.sh@16 -- # sleep 1 00:19:19.641 20:14:17 -- integrity/stats.sh@17 -- # rpc_cmd bdev_ocf_get_stats MalCache1 00:19:19.641 20:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.641 20:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:19.641 { 00:19:19.641 "usage": { 00:19:19.641 "occupancy": { 00:19:19.641 "count": 21632, 00:19:19.641 "percentage": "32.26", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "free": { 00:19:19.641 "count": 23776, 00:19:19.641 "percentage": "35.46", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "clean": { 00:19:19.641 "count": 21632, 00:19:19.641 "percentage": "100.0", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "dirty": { 00:19:19.641 "count": 0, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 } 00:19:19.641 }, 00:19:19.641 "requests": { 00:19:19.641 "rd_hits": { 00:19:19.641 "count": 2, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "rd_partial_misses": { 00:19:19.641 "count": 1, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "rd_full_misses": { 00:19:19.641 "count": 1, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "rd_total": { 00:19:19.641 "count": 4, 00:19:19.641 "percentage": "0.1", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "wr_hits": { 00:19:19.641 "count": 8, 00:19:19.641 "percentage": "0.3", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "wr_partial_misses": { 00:19:19.641 "count": 0, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "wr_full_misses": { 00:19:19.641 "count": 21624, 00:19:19.641 "percentage": "99.94", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "wr_total": { 00:19:19.641 "count": 21632, 00:19:19.641 "percentage": "99.98", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "rd_pt": { 00:19:19.641 "count": 0, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "wr_pt": { 00:19:19.641 "count": 0, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "serviced": { 00:19:19.641 "count": 21636, 00:19:19.641 "percentage": "100.0", 00:19:19.641 "units": "Requests" 00:19:19.641 }, 00:19:19.641 "total": { 00:19:19.641 "count": 21636, 00:19:19.641 "percentage": "100.0", 
00:19:19.641 "units": "Requests" 00:19:19.641 } 00:19:19.641 }, 00:19:19.641 "blocks": { 00:19:19.641 "core_volume_rd": { 00:19:19.641 "count": 9, 00:19:19.641 "percentage": "0.4", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "core_volume_wr": { 00:19:19.641 "count": 21632, 00:19:19.641 "percentage": "99.95", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "core_volume_total": { 00:19:19.641 "count": 21641, 00:19:19.641 "percentage": "100.0", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "cache_volume_rd": { 00:19:19.641 "count": 2, 00:19:19.641 "percentage": "0.0", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "cache_volume_wr": { 00:19:19.641 "count": 21641, 00:19:19.641 "percentage": "99.99", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "cache_volume_total": { 00:19:19.641 "count": 21643, 00:19:19.641 "percentage": "100.0", 00:19:19.641 "units": "4KiB blocks" 00:19:19.641 }, 00:19:19.641 "volume_rd": { 00:19:19.641 "count": 11, 00:19:19.641 "percentage": "0.5", 00:19:19.641 "units": "4KiB blocks" 00:19:19.642 }, 00:19:19.642 "volume_wr": { 00:19:19.642 "count": 21632, 00:19:19.642 "percentage": "99.94", 00:19:19.642 "units": "4KiB blocks" 00:19:19.642 }, 00:19:19.642 "volume_total": { 00:19:19.642 "count": 21643, 00:19:19.642 "percentage": "100.0", 00:19:19.642 "units": "4KiB blocks" 00:19:19.642 } 00:19:19.642 }, 00:19:19.642 "errors": { 00:19:19.642 "core_volume_rd": { 00:19:19.642 "count": 0, 00:19:19.642 "percentage": "0.0", 00:19:19.642 "units": "Requests" 00:19:19.642 }, 00:19:19.642 "core_volume_wr": { 00:19:19.642 "count": 0, 00:19:19.642 "percentage": "0.0", 00:19:19.642 "units": "Requests" 00:19:19.642 }, 00:19:19.642 "core_volume_total": { 00:19:19.642 "count": 0, 00:19:19.642 "percentage": "0.0", 00:19:19.642 "units": "Requests" 00:19:19.642 }, 00:19:19.642 "cache_volume_rd": { 00:19:19.642 "count": 0, 00:19:19.642 "percentage": "0.0", 00:19:19.642 "units": "Requests" 00:19:19.642 }, 00:19:19.642 "cache_volume_wr": { 00:19:19.642 "count": 0, 00:19:19.642 "percentage": "0.0", 00:19:19.642 "units": "Requests" 00:19:19.642 }, 00:19:19.642 "cache_volume_total": { 00:19:19.642 "count": 0, 00:19:19.642 "percentage": "0.0", 00:19:19.642 "units": "Requests" 00:19:19.642 }, 00:19:19.642 "total": { 00:19:19.642 "count": 0, 00:19:19.642 "percentage": "0.0", 00:19:19.642 "units": "Requests" 00:19:19.642 } 00:19:19.642 } 00:19:19.642 } 00:19:19.642 20:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.642 20:14:17 -- integrity/stats.sh@18 -- # kill -9 2165858 00:19:19.642 20:14:17 -- integrity/stats.sh@19 -- # wait 2165858 00:19:19.642 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh: line 19: 2165858 Killed $bdevperf --json <(gen_malloc_ocf_json) -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock 00:19:19.642 20:14:17 -- integrity/stats.sh@19 -- # true 00:19:19.642 00:19:19.642 real 0m2.787s 00:19:19.642 user 0m2.760s 00:19:19.642 sys 0m0.651s 00:19:19.642 20:14:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.642 20:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:19.642 ************************************ 00:19:19.642 END TEST ocf_stats 00:19:19.642 ************************************ 00:19:19.901 20:14:17 -- ocf/ocf.sh@14 -- # run_test ocf_flush /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh 00:19:19.901 20:14:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:19.901 20:14:17 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:19:19.901 20:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:19.901 ************************************ 00:19:19.901 START TEST ocf_flush 00:19:19.901 ************************************ 00:19:19.901 20:14:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh 00:19:19.901 20:14:17 -- integrity/flush.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf 00:19:19.901 20:14:17 -- integrity/flush.sh@11 -- # rpc_py='/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock' 00:19:19.901 20:14:17 -- integrity/flush.sh@73 -- # bdevperf_pid=2166274 00:19:19.901 20:14:17 -- integrity/flush.sh@74 -- # trap 'killprocess $bdevperf_pid' SIGINT SIGTERM EXIT 00:19:19.901 20:14:17 -- integrity/flush.sh@75 -- # waitforlisten 2166274 00:19:19.901 20:14:17 -- common/autotest_common.sh@819 -- # '[' -z 2166274 ']' 00:19:19.901 20:14:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.901 20:14:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:19.901 20:14:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.901 20:14:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:19.901 20:14:17 -- common/autotest_common.sh@10 -- # set +x 00:19:19.901 20:14:17 -- integrity/flush.sh@72 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock 00:19:19.901 20:14:17 -- integrity/flush.sh@72 -- # bdevperf_config 00:19:19.901 20:14:17 -- integrity/flush.sh@19 -- # local config 00:19:19.901 20:14:17 -- integrity/flush.sh@50 -- # cat 00:19:19.901 20:14:17 -- integrity/flush.sh@50 -- # config='{ 00:19:19.901 "method": "bdev_malloc_create", 00:19:19.901 "params": { 00:19:19.901 "name": "Malloc0", 00:19:19.901 "num_blocks": 102400, 00:19:19.901 "block_size": 512 00:19:19.901 } 00:19:19.901 }, 00:19:19.901 { 00:19:19.901 "method": "bdev_malloc_create", 00:19:19.901 "params": { 00:19:19.901 "name": "Malloc1", 00:19:19.901 "num_blocks": 1024000, 00:19:19.901 "block_size": 512 00:19:19.901 } 00:19:19.901 }, 00:19:19.901 { 00:19:19.901 "method": "bdev_ocf_create", 00:19:19.901 "params": { 00:19:19.901 "name": "MalCache0", 00:19:19.901 "mode": "wb", 00:19:19.901 "cache_line_size": 4, 00:19:19.901 "cache_bdev_name": "Malloc0", 00:19:19.901 "core_bdev_name": "Malloc1" 00:19:19.901 } 00:19:19.901 }' 00:19:19.901 20:14:17 -- integrity/flush.sh@52 -- # jq . 
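The flush test builds a single write-back (wb) cache, MalCache0, with Malloc0 as the cache device and Malloc1 as the core, which is what makes an explicit flush meaningful: dirty cache lines accumulate on Malloc0 and only reach Malloc1 when flushed. In the run that follows, flush.sh triggers and then polls the flush through two RPCs; a minimal sketch of that sequence, with the poll interval chosen here for illustration:

  RPC="/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC bdev_ocf_flush_start MalCache0                     # start flushing dirty lines to the core
  # poll until the flush is no longer in progress, then confirm it ended cleanly (status 0)
  while $RPC bdev_ocf_flush_status MalCache0 | jq -e .in_progress > /dev/null; do
    sleep 1
  done
  $RPC bdev_ocf_flush_status MalCache0 | jq -e '.status == 0' && echo 'flush completed OK'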
00:19:19.901 20:14:17 -- integrity/flush.sh@53 -- # IFS=, 00:19:19.901 20:14:17 -- integrity/flush.sh@54 -- # printf '%s\n' '{ 00:19:19.901 "method": "bdev_malloc_create", 00:19:19.901 "params": { 00:19:19.901 "name": "Malloc0", 00:19:19.901 "num_blocks": 102400, 00:19:19.901 "block_size": 512 00:19:19.901 } 00:19:19.901 }, 00:19:19.901 { 00:19:19.901 "method": "bdev_malloc_create", 00:19:19.901 "params": { 00:19:19.901 "name": "Malloc1", 00:19:19.901 "num_blocks": 1024000, 00:19:19.901 "block_size": 512 00:19:19.901 } 00:19:19.901 }, 00:19:19.901 { 00:19:19.901 "method": "bdev_ocf_create", 00:19:19.901 "params": { 00:19:19.901 "name": "MalCache0", 00:19:19.901 "mode": "wb", 00:19:19.901 "cache_line_size": 4, 00:19:19.901 "cache_bdev_name": "Malloc0", 00:19:19.901 "core_bdev_name": "Malloc1" 00:19:19.901 } 00:19:19.901 }' 00:19:19.901 [2024-04-25 20:14:17.690126] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:19.901 [2024-04-25 20:14:17.690196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166274 ] 00:19:19.901 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.901 [2024-04-25 20:14:17.784429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.160 [2024-04-25 20:14:17.881154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.160 [2024-04-25 20:14:18.081691] 'OCF_Core' volume operations registered 00:19:20.160 [2024-04-25 20:14:18.085163] 'OCF_Cache' volume operations registered 00:19:20.160 [2024-04-25 20:14:18.089121] 'OCF Composite' volume operations registered 00:19:20.160 [2024-04-25 20:14:18.092614] 'SPDK_block_device' volume operations registered 00:19:20.419 [2024-04-25 20:14:18.255382] Inserting cache MalCache0 00:19:20.419 [2024-04-25 20:14:18.255888] MalCache0: Metadata initialized 00:19:20.419 [2024-04-25 20:14:18.256331] MalCache0: Successfully added 00:19:20.419 [2024-04-25 20:14:18.256345] MalCache0: Cache mode : wb 00:19:20.419 [2024-04-25 20:14:18.266889] MalCache0: Super block config offset : 0 kiB 00:19:20.419 [2024-04-25 20:14:18.266912] MalCache0: Super block config size : 2200 B 00:19:20.419 [2024-04-25 20:14:18.266919] MalCache0: Super block runtime offset : 128 kiB 00:19:20.419 [2024-04-25 20:14:18.266926] MalCache0: Super block runtime size : 4 B 00:19:20.419 [2024-04-25 20:14:18.266933] MalCache0: Reserved offset : 256 kiB 00:19:20.419 [2024-04-25 20:14:18.266940] MalCache0: Reserved size : 128 kiB 00:19:20.419 [2024-04-25 20:14:18.266946] MalCache0: Part config offset : 384 kiB 00:19:20.419 [2024-04-25 20:14:18.266953] MalCache0: Part config size : 48 kiB 00:19:20.419 [2024-04-25 20:14:18.266959] MalCache0: Part runtime offset : 640 kiB 00:19:20.419 [2024-04-25 20:14:18.266966] MalCache0: Part runtime size : 72 kiB 00:19:20.419 [2024-04-25 20:14:18.266972] MalCache0: Core config offset : 768 kiB 00:19:20.419 [2024-04-25 20:14:18.266979] MalCache0: Core config size : 512 kiB 00:19:20.419 [2024-04-25 20:14:18.266985] MalCache0: Core runtime offset : 1792 kiB 00:19:20.419 [2024-04-25 20:14:18.266992] MalCache0: Core runtime size : 1172 kiB 00:19:20.419 [2024-04-25 20:14:18.266998] MalCache0: Core UUID offset : 3072 kiB 00:19:20.419 [2024-04-25 20:14:18.267005] MalCache0: Core UUID size : 16384 kiB 00:19:20.419 [2024-04-25 20:14:18.267011] MalCache0: Cleaning offset : 35840 kiB 00:19:20.419 [2024-04-25 
20:14:18.267018] MalCache0: Cleaning size : 44 kiB 00:19:20.419 [2024-04-25 20:14:18.267024] MalCache0: LRU list offset : 35968 kiB 00:19:20.419 [2024-04-25 20:14:18.267031] MalCache0: LRU list size : 36 kiB 00:19:20.419 [2024-04-25 20:14:18.267037] MalCache0: Collision offset : 36096 kiB 00:19:20.419 [2024-04-25 20:14:18.267044] MalCache0: Collision size : 44 kiB 00:19:20.419 [2024-04-25 20:14:18.267050] MalCache0: List info offset : 36224 kiB 00:19:20.419 [2024-04-25 20:14:18.267057] MalCache0: List info size : 36 kiB 00:19:20.419 [2024-04-25 20:14:18.267063] MalCache0: Hash offset : 36352 kiB 00:19:20.419 [2024-04-25 20:14:18.267070] MalCache0: Hash size : 4 kiB 00:19:20.419 [2024-04-25 20:14:18.267077] MalCache0: Cache line size: 4 kiB 00:19:20.419 [2024-04-25 20:14:18.267085] MalCache0: Metadata capacity: 18 MiB 00:19:20.419 [2024-04-25 20:14:18.277401] MalCache0: Policy 'always' initialized successfully 00:19:20.678 [2024-04-25 20:14:18.366565] MalCache0: Done saving cache state! 00:19:20.678 [2024-04-25 20:14:18.399427] MalCache0: Cache attached 00:19:20.678 [2024-04-25 20:14:18.399523] MalCache0: Successfully attached 00:19:20.678 [2024-04-25 20:14:18.399818] MalCache0: Inserting core Malloc1 00:19:20.678 [2024-04-25 20:14:18.399841] MalCache0.Malloc1: Seqential cutoff init 00:19:20.678 [2024-04-25 20:14:18.432978] MalCache0.Malloc1: Successfully added 00:19:20.678 Running I/O for 120 seconds... 00:19:20.678 20:14:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:20.678 20:14:18 -- common/autotest_common.sh@852 -- # return 0 00:19:20.678 20:14:18 -- integrity/flush.sh@76 -- # sleep 5 00:19:25.947 20:14:23 -- integrity/flush.sh@78 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_start MalCache0 00:19:25.947 [2024-04-25 20:14:23.669398] MalCache0: Flushing cache 00:19:25.947 20:14:23 -- integrity/flush.sh@79 -- # sleep 1 00:19:25.947 [2024-04-25 20:14:23.776997] MalCache0: Flushing cache completed 00:19:26.886 20:14:24 -- integrity/flush.sh@81 -- # check_flush_in_progress 00:19:26.887 20:14:24 -- integrity/flush.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0 00:19:26.887 20:14:24 -- integrity/flush.sh@15 -- # jq -e .in_progress 00:19:27.145 20:14:24 -- integrity/flush.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0 00:19:27.145 20:14:24 -- integrity/flush.sh@84 -- # jq -e '.status == 0' 00:19:27.145 true 00:19:27.145 20:14:25 -- integrity/flush.sh@1 -- # killprocess 2166274 00:19:27.145 20:14:25 -- common/autotest_common.sh@926 -- # '[' -z 2166274 ']' 00:19:27.145 20:14:25 -- common/autotest_common.sh@930 -- # kill -0 2166274 00:19:27.145 20:14:25 -- common/autotest_common.sh@931 -- # uname 00:19:27.145 20:14:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:27.145 20:14:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2166274 00:19:27.145 20:14:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:27.145 20:14:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:27.145 20:14:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2166274' 00:19:27.145 killing process with pid 2166274 00:19:27.145 20:14:25 -- common/autotest_common.sh@945 -- # kill 2166274 00:19:27.145 20:14:25 -- common/autotest_common.sh@950 -- # wait 2166274 00:19:27.145 Received shutdown signal, test time was 
about 6.605567 seconds 00:19:27.145 00:19:27.145 Latency(us) 00:19:27.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.145 Job: MalCache0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:27.145 MalCache0 : 6.60 40580.48 158.52 0.00 0.00 3150.05 146.92 94371.84 00:19:27.145 =================================================================================================================== 00:19:27.145 Total : 40580.48 158.52 0.00 0.00 3150.05 146.92 94371.84 00:19:27.145 [2024-04-25 20:14:25.070320] MalCache0: Flushing cache 00:19:27.403 [2024-04-25 20:14:25.158888] MalCache0: Flushing cache completed 00:19:27.403 [2024-04-25 20:14:25.158958] MalCache0: Stopping cache 00:19:27.403 [2024-04-25 20:14:25.246113] MalCache0: Done saving cache state! 00:19:27.403 [2024-04-25 20:14:25.264443] Cache MalCache0 successfully stopped 00:19:27.969 00:19:27.969 real 0m8.279s 00:19:27.969 user 0m8.464s 00:19:27.969 sys 0m0.679s 00:19:27.969 20:14:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:27.969 20:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:27.969 ************************************ 00:19:27.969 END TEST ocf_flush 00:19:27.969 ************************************ 00:19:28.228 20:14:25 -- ocf/ocf.sh@15 -- # run_test ocf_create_destruct /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh 00:19:28.228 20:14:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:28.228 20:14:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:28.228 20:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:28.228 ************************************ 00:19:28.228 START TEST ocf_create_destruct 00:19:28.228 ************************************ 00:19:28.228 20:14:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh 00:19:28.228 20:14:25 -- management/create-destruct.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:19:28.228 20:14:25 -- management/create-destruct.sh@21 -- # spdk_pid=2167386 00:19:28.228 20:14:25 -- management/create-destruct.sh@23 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:28.228 20:14:25 -- management/create-destruct.sh@25 -- # waitforlisten 2167386 00:19:28.228 20:14:25 -- management/create-destruct.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:19:28.228 20:14:25 -- common/autotest_common.sh@819 -- # '[' -z 2167386 ']' 00:19:28.228 20:14:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.228 20:14:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:28.228 20:14:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.228 20:14:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:28.228 20:14:25 -- common/autotest_common.sh@10 -- # set +x 00:19:28.228 [2024-04-25 20:14:26.049525] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:28.228 [2024-04-25 20:14:26.049612] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167386 ] 00:19:28.228 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.228 [2024-04-25 20:14:26.154145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.487 [2024-04-25 20:14:26.256271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.745 [2024-04-25 20:14:26.451341] 'OCF_Core' volume operations registered 00:19:28.745 [2024-04-25 20:14:26.454843] 'OCF_Cache' volume operations registered 00:19:28.745 [2024-04-25 20:14:26.458793] 'OCF Composite' volume operations registered 00:19:28.745 [2024-04-25 20:14:26.462284] 'SPDK_block_device' volume operations registered 00:19:29.312 20:14:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:29.312 20:14:26 -- common/autotest_common.sh@852 -- # return 0 00:19:29.312 20:14:26 -- management/create-destruct.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:19:29.312 Malloc0 00:19:29.312 20:14:27 -- management/create-destruct.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:19:29.571 Malloc1 00:19:29.571 20:14:27 -- management/create-destruct.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt Malloc0 NonExisting 00:19:29.830 [2024-04-25 20:14:27.633236] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for core device 'NonExisting' to connect 00:19:29.830 PartCache 00:19:29.830 20:14:27 -- management/create-destruct.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs PartCache 00:19:29.830 20:14:27 -- management/create-destruct.sh@32 -- # jq -e '.[0] | .started == false and .cache.attached and .core.attached == false' 00:19:30.089 true 00:19:30.089 20:14:27 -- management/create-destruct.sh@35 -- # jq -e '.[0] | .name == "PartCache"' 00:19:30.089 20:14:27 -- management/create-destruct.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs NonExisting 00:19:30.348 true 00:19:30.348 20:14:28 -- management/create-destruct.sh@38 -- # bdev_check_claimed Malloc0 00:19:30.348 20:14:28 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:19:30.348 20:14:28 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:30.607 20:14:28 -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:19:30.607 20:14:28 -- management/create-destruct.sh@14 -- # return 0 00:19:30.607 20:14:28 -- management/create-destruct.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete PartCache 00:19:30.866 20:14:28 -- management/create-destruct.sh@44 -- # bdev_check_claimed Malloc0 00:19:30.866 20:14:28 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:19:30.866 20:14:28 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:31.126 20:14:28 -- management/create-destruct.sh@13 -- # '[' false = true ']' 00:19:31.126 20:14:28 -- management/create-destruct.sh@16 -- # return 1 00:19:31.126 20:14:28 -- management/create-destruct.sh@49 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create FullCache wt Malloc0 Malloc1 00:19:31.386 [2024-04-25 20:14:29.074583] Inserting cache FullCache 00:19:31.386 [2024-04-25 20:14:29.075077] FullCache: Metadata initialized 00:19:31.386 [2024-04-25 20:14:29.075518] FullCache: Successfully added 00:19:31.386 [2024-04-25 20:14:29.075532] FullCache: Cache mode : wt 00:19:31.386 [2024-04-25 20:14:29.086326] FullCache: Super block config offset : 0 kiB 00:19:31.386 [2024-04-25 20:14:29.086348] FullCache: Super block config size : 2200 B 00:19:31.386 [2024-04-25 20:14:29.086355] FullCache: Super block runtime offset : 128 kiB 00:19:31.386 [2024-04-25 20:14:29.086362] FullCache: Super block runtime size : 4 B 00:19:31.386 [2024-04-25 20:14:29.086369] FullCache: Reserved offset : 256 kiB 00:19:31.386 [2024-04-25 20:14:29.086382] FullCache: Reserved size : 128 kiB 00:19:31.386 [2024-04-25 20:14:29.086389] FullCache: Part config offset : 384 kiB 00:19:31.386 [2024-04-25 20:14:29.086395] FullCache: Part config size : 48 kiB 00:19:31.386 [2024-04-25 20:14:29.086402] FullCache: Part runtime offset : 640 kiB 00:19:31.386 [2024-04-25 20:14:29.086408] FullCache: Part runtime size : 72 kiB 00:19:31.386 [2024-04-25 20:14:29.086414] FullCache: Core config offset : 768 kiB 00:19:31.386 [2024-04-25 20:14:29.086421] FullCache: Core config size : 512 kiB 00:19:31.386 [2024-04-25 20:14:29.086427] FullCache: Core runtime offset : 1792 kiB 00:19:31.386 [2024-04-25 20:14:29.086433] FullCache: Core runtime size : 1172 kiB 00:19:31.386 [2024-04-25 20:14:29.086440] FullCache: Core UUID offset : 3072 kiB 00:19:31.386 [2024-04-25 20:14:29.086446] FullCache: Core UUID size : 16384 kiB 00:19:31.386 [2024-04-25 20:14:29.086452] FullCache: Cleaning offset : 35840 kiB 00:19:31.386 [2024-04-25 20:14:29.086459] FullCache: Cleaning size : 196 kiB 00:19:31.386 [2024-04-25 20:14:29.086465] FullCache: LRU list offset : 36096 kiB 00:19:31.386 [2024-04-25 20:14:29.086471] FullCache: LRU list size : 148 kiB 00:19:31.386 [2024-04-25 20:14:29.086478] FullCache: Collision offset : 36352 kiB 00:19:31.386 [2024-04-25 20:14:29.086484] FullCache: Collision size : 196 kiB 00:19:31.386 [2024-04-25 20:14:29.086490] FullCache: List info offset : 36608 kiB 00:19:31.386 [2024-04-25 20:14:29.086497] FullCache: List info size : 148 kiB 00:19:31.386 [2024-04-25 20:14:29.086503] FullCache: Hash offset : 36864 kiB 00:19:31.386 [2024-04-25 20:14:29.086509] FullCache: Hash size : 20 kiB 00:19:31.386 [2024-04-25 20:14:29.086517] FullCache: Cache line size: 4 kiB 00:19:31.386 [2024-04-25 20:14:29.086525] FullCache: Metadata capacity: 18 MiB 00:19:31.386 [2024-04-25 20:14:29.096883] FullCache: Policy 'always' initialized successfully 00:19:31.386 [2024-04-25 20:14:29.211017] FullCache: Done saving cache state! 
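Once the Malloc1 core attaches just below, the script verifies that the composite device is fully up; a sketch of that liveness check, using the names from the steps above:

# Sketch: .started only turns true once both the cache and core bdevs are connected.
./scripts/rpc.py bdev_ocf_get_bdevs FullCache \
    | jq -e '.[0] | .started and .cache.attached and .core.attached'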
00:19:31.386 [2024-04-25 20:14:29.243380] FullCache: Cache attached 00:19:31.386 [2024-04-25 20:14:29.243477] FullCache: Successfully attached 00:19:31.386 [2024-04-25 20:14:29.243754] FullCache: Inserting core Malloc1 00:19:31.386 [2024-04-25 20:14:29.243789] FullCache.Malloc1: Seqential cutoff init 00:19:31.386 [2024-04-25 20:14:29.275237] FullCache.Malloc1: Successfully added 00:19:31.386 FullCache 00:19:31.386 20:14:29 -- management/create-destruct.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs FullCache 00:19:31.386 20:14:29 -- management/create-destruct.sh@51 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:19:31.646 true 00:19:31.646 20:14:29 -- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc0 00:19:31.646 20:14:29 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:19:31.646 20:14:29 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:31.905 20:14:29 -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:19:31.905 20:14:29 -- management/create-destruct.sh@14 -- # return 0 00:19:31.905 20:14:29 -- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc1 00:19:31.905 20:14:29 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1 00:19:31.905 20:14:29 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:32.165 20:14:30 -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:19:32.165 20:14:30 -- management/create-destruct.sh@14 -- # return 0 00:19:32.165 20:14:30 -- management/create-destruct.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete FullCache 00:19:32.425 [2024-04-25 20:14:30.231985] FullCache: Flushing cache 00:19:32.425 [2024-04-25 20:14:30.232024] FullCache: Flushing cache completed 00:19:32.425 [2024-04-25 20:14:30.233034] FullCache.Malloc1: Removing core 00:19:32.425 [2024-04-25 20:14:30.265278] FullCache: Core Malloc1 successfully removed 00:19:32.425 [2024-04-25 20:14:30.265331] FullCache: Stopping cache 00:19:32.684 [2024-04-25 20:14:30.372316] FullCache: Done saving cache state! 
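The claim checks exercised above (and repeated after the delete just below) reduce to a small helper; a sketch under the same names:

# Sketch of the claim check: an OCF bdev claims its cache and core bdevs,
# so .claimed is true while FullCache exists and false again after deletion.
bdev_check_claimed() {
    [ "$(./scripts/rpc.py bdev_get_bdevs -b "$1" | jq -r '.[0].claimed')" = true ]
}
bdev_check_claimed Malloc0                     # true while FullCache is up
./scripts/rpc.py bdev_ocf_delete FullCache
bdev_check_claimed Malloc0 || echo "Malloc0 released"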
00:19:32.684 [2024-04-25 20:14:30.389152] Cache FullCache successfully stopped 00:19:32.684 20:14:30 -- management/create-destruct.sh@60 -- # bdev_check_claimed Malloc0 00:19:32.684 20:14:30 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:19:32.684 20:14:30 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:32.944 20:14:30 -- management/create-destruct.sh@13 -- # '[' false = true ']' 00:19:32.944 20:14:30 -- management/create-destruct.sh@16 -- # return 1 00:19:32.944 20:14:30 -- management/create-destruct.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create HotCache wt Malloc0 Malloc1 00:19:32.944 [2024-04-25 20:14:30.863736] Inserting cache HotCache 00:19:32.944 [2024-04-25 20:14:30.864207] HotCache: Metadata initialized 00:19:32.944 [2024-04-25 20:14:30.864629] HotCache: Successfully added 00:19:32.944 [2024-04-25 20:14:30.864644] HotCache: Cache mode : wt 00:19:32.944 [2024-04-25 20:14:30.875366] HotCache: Super block config offset : 0 kiB 00:19:32.944 [2024-04-25 20:14:30.875387] HotCache: Super block config size : 2200 B 00:19:32.944 [2024-04-25 20:14:30.875394] HotCache: Super block runtime offset : 128 kiB 00:19:32.945 [2024-04-25 20:14:30.875400] HotCache: Super block runtime size : 4 B 00:19:32.945 [2024-04-25 20:14:30.875407] HotCache: Reserved offset : 256 kiB 00:19:32.945 [2024-04-25 20:14:30.875413] HotCache: Reserved size : 128 kiB 00:19:32.945 [2024-04-25 20:14:30.875420] HotCache: Part config offset : 384 kiB 00:19:32.945 [2024-04-25 20:14:30.875426] HotCache: Part config size : 48 kiB 00:19:32.945 [2024-04-25 20:14:30.875433] HotCache: Part runtime offset : 640 kiB 00:19:32.945 [2024-04-25 20:14:30.875439] HotCache: Part runtime size : 72 kiB 00:19:32.945 [2024-04-25 20:14:30.875445] HotCache: Core config offset : 768 kiB 00:19:32.945 [2024-04-25 20:14:30.875452] HotCache: Core config size : 512 kiB 00:19:32.945 [2024-04-25 20:14:30.875458] HotCache: Core runtime offset : 1792 kiB 00:19:32.945 [2024-04-25 20:14:30.875464] HotCache: Core runtime size : 1172 kiB 00:19:32.945 [2024-04-25 20:14:30.875471] HotCache: Core UUID offset : 3072 kiB 00:19:32.945 [2024-04-25 20:14:30.875477] HotCache: Core UUID size : 16384 kiB 00:19:32.945 [2024-04-25 20:14:30.875484] HotCache: Cleaning offset : 35840 kiB 00:19:32.945 [2024-04-25 20:14:30.875490] HotCache: Cleaning size : 196 kiB 00:19:32.945 [2024-04-25 20:14:30.875497] HotCache: LRU list offset : 36096 kiB 00:19:32.945 [2024-04-25 20:14:30.875503] HotCache: LRU list size : 148 kiB 00:19:32.945 [2024-04-25 20:14:30.875509] HotCache: Collision offset : 36352 kiB 00:19:32.945 [2024-04-25 20:14:30.875516] HotCache: Collision size : 196 kiB 00:19:32.945 [2024-04-25 20:14:30.875522] HotCache: List info offset : 36608 kiB 00:19:32.945 [2024-04-25 20:14:30.875528] HotCache: List info size : 148 kiB 00:19:32.945 [2024-04-25 20:14:30.875535] HotCache: Hash offset : 36864 kiB 00:19:32.945 [2024-04-25 20:14:30.875541] HotCache: Hash size : 20 kiB 00:19:32.945 [2024-04-25 20:14:30.875548] HotCache: Cache line size: 4 kiB 00:19:32.945 [2024-04-25 20:14:30.875557] HotCache: Metadata capacity: 18 MiB 00:19:33.204 [2024-04-25 20:14:30.885866] HotCache: Policy 'always' initialized successfully 00:19:33.204 [2024-04-25 20:14:30.999474] HotCache: Done saving cache state! 
00:19:33.204 [2024-04-25 20:14:31.031165] HotCache: Cache attached 00:19:33.204 [2024-04-25 20:14:31.031261] HotCache: Successfully attached 00:19:33.204 [2024-04-25 20:14:31.031534] HotCache: Inserting core Malloc1 00:19:33.204 [2024-04-25 20:14:31.031559] HotCache.Malloc1: Seqential cutoff init 00:19:33.204 [2024-04-25 20:14:31.063173] HotCache.Malloc1: Successfully added 00:19:33.204 HotCache 00:19:33.204 20:14:31 -- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc0 00:19:33.204 20:14:31 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0 00:19:33.204 20:14:31 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:33.463 20:14:31 -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:19:33.463 20:14:31 -- management/create-destruct.sh@14 -- # return 0 00:19:33.463 20:14:31 -- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc1 00:19:33.463 20:14:31 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1 00:19:33.463 20:14:31 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:33.722 20:14:31 -- management/create-destruct.sh@13 -- # '[' true = true ']' 00:19:33.722 20:14:31 -- management/create-destruct.sh@14 -- # return 0 00:19:33.722 20:14:31 -- management/create-destruct.sh@72 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:33.981 [2024-04-25 20:14:31.779173] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'HotCache' because its cache device 'Malloc0' was removed 00:19:33.981 [2024-04-25 20:14:31.779431] HotCache: Flushing cache 00:19:33.981 [2024-04-25 20:14:31.779453] HotCache: Flushing cache completed 00:19:33.981 [2024-04-25 20:14:31.779535] HotCache: Stopping cache 00:19:33.981 [2024-04-25 20:14:31.887033] HotCache: Done saving cache state! 
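The hot-remove path triggered here can be reproduced with three RPC calls; a sketch, with HotCache set up over Malloc0/Malloc1 as above:

# Sketch of the hot-remove check: deleting the cache bdev under HotCache makes
# vbdev_ocf flush and stop the cache, which releases the core bdev.
./scripts/rpc.py bdev_malloc_delete Malloc0                        # cache device goes away
./scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[0].claimed'     # -> false, core released
./scripts/rpc.py bdev_get_bdevs | jq 'map(select(.name == "HotCache")) == []'   # -> true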
00:19:33.981 [2024-04-25 20:14:31.904177] Cache HotCache successfully stopped 00:19:34.240 20:14:31 -- management/create-destruct.sh@74 -- # bdev_check_claimed Malloc1 00:19:34.240 20:14:31 -- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1 00:19:34.240 20:14:31 -- management/create-destruct.sh@13 -- # jq '.[0].claimed' 00:19:34.500 20:14:32 -- management/create-destruct.sh@13 -- # '[' false = true ']' 00:19:34.500 20:14:32 -- management/create-destruct.sh@16 -- # return 1 00:19:34.500 20:14:32 -- management/create-destruct.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs 00:19:34.500 20:14:32 -- management/create-destruct.sh@79 -- # status='[ 00:19:34.500 { 00:19:34.500 "name": "Malloc1", 00:19:34.500 "aliases": [ 00:19:34.500 "c4084153-924c-402b-8342-671ff147a7bf" 00:19:34.500 ], 00:19:34.500 "product_name": "Malloc disk", 00:19:34.500 "block_size": 512, 00:19:34.500 "num_blocks": 206848, 00:19:34.500 "uuid": "c4084153-924c-402b-8342-671ff147a7bf", 00:19:34.500 "assigned_rate_limits": { 00:19:34.500 "rw_ios_per_sec": 0, 00:19:34.500 "rw_mbytes_per_sec": 0, 00:19:34.500 "r_mbytes_per_sec": 0, 00:19:34.500 "w_mbytes_per_sec": 0 00:19:34.500 }, 00:19:34.500 "claimed": false, 00:19:34.500 "zoned": false, 00:19:34.500 "supported_io_types": { 00:19:34.500 "read": true, 00:19:34.500 "write": true, 00:19:34.500 "unmap": true, 00:19:34.500 "write_zeroes": true, 00:19:34.500 "flush": true, 00:19:34.500 "reset": true, 00:19:34.500 "compare": false, 00:19:34.500 "compare_and_write": false, 00:19:34.500 "abort": true, 00:19:34.500 "nvme_admin": false, 00:19:34.500 "nvme_io": false 00:19:34.500 }, 00:19:34.500 "memory_domains": [ 00:19:34.500 { 00:19:34.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:34.500 "dma_device_type": 2 00:19:34.500 } 00:19:34.500 ], 00:19:34.500 "driver_specific": {} 00:19:34.500 } 00:19:34.500 ]' 00:19:34.500 20:14:32 -- management/create-destruct.sh@80 -- # echo '[' '{' '"name":' '"Malloc1",' '"aliases":' '[' '"c4084153-924c-402b-8342-671ff147a7bf"' '],' '"product_name":' '"Malloc' 'disk",' '"block_size":' 512, '"num_blocks":' 206848, '"uuid":' '"c4084153-924c-402b-8342-671ff147a7bf",' '"assigned_rate_limits":' '{' '"rw_ios_per_sec":' 0, '"rw_mbytes_per_sec":' 0, '"r_mbytes_per_sec":' 0, '"w_mbytes_per_sec":' 0 '},' '"claimed":' false, '"zoned":' false, '"supported_io_types":' '{' '"read":' true, '"write":' true, '"unmap":' true, '"write_zeroes":' true, '"flush":' true, '"reset":' true, '"compare":' false, '"compare_and_write":' false, '"abort":' true, '"nvme_admin":' false, '"nvme_io":' false '},' '"memory_domains":' '[' '{' '"dma_device_id":' '"SPDK_ACCEL_DMA_DEVICE",' '"dma_device_type":' 2 '}' '],' '"driver_specific":' '{}' '}' ']' 00:19:34.500 20:14:32 -- management/create-destruct.sh@80 -- # jq 'map(select(.name == "HotCache")) == []' 00:19:34.759 20:14:32 -- management/create-destruct.sh@80 -- # gone=true 00:19:34.759 20:14:32 -- management/create-destruct.sh@81 -- # [[ true == false ]] 00:19:34.759 20:14:32 -- management/create-destruct.sh@87 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt NonExisting Malloc1 00:19:34.759 [2024-04-25 20:14:32.680596] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for cache device 'NonExisting' to connect 00:19:34.759 PartCache 00:19:35.019 20:14:32 -- management/create-destruct.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:35.019 20:14:32 
-- management/create-destruct.sh@91 -- # killprocess 2167386 00:19:35.019 20:14:32 -- common/autotest_common.sh@926 -- # '[' -z 2167386 ']' 00:19:35.019 20:14:32 -- common/autotest_common.sh@930 -- # kill -0 2167386 00:19:35.019 20:14:32 -- common/autotest_common.sh@931 -- # uname 00:19:35.019 20:14:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:35.019 20:14:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2167386 00:19:35.019 20:14:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:35.019 20:14:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:35.019 20:14:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2167386' 00:19:35.019 killing process with pid 2167386 00:19:35.019 20:14:32 -- common/autotest_common.sh@945 -- # kill 2167386 00:19:35.019 20:14:32 -- common/autotest_common.sh@950 -- # wait 2167386 00:19:35.019 [2024-04-25 20:14:32.911081] bdev.c:2354:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc1'! 00:19:35.019 [2024-04-25 20:14:32.911177] vbdev_ocf.c:1361:hotremove_cb: *NOTICE*: Deinitializing 'PartCache' because its core device 'Malloc1' was removed 00:19:35.588 00:19:35.588 real 0m7.425s 00:19:35.588 user 0m11.656s 00:19:35.588 sys 0m1.491s 00:19:35.589 20:14:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:35.589 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:19:35.589 ************************************ 00:19:35.589 END TEST ocf_create_destruct 00:19:35.589 ************************************ 00:19:35.589 20:14:33 -- ocf/ocf.sh@16 -- # run_test ocf_multicore /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh 00:19:35.589 20:14:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:35.589 20:14:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:35.589 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:19:35.589 ************************************ 00:19:35.589 START TEST ocf_multicore 00:19:35.589 ************************************ 00:19:35.589 20:14:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh 00:19:35.589 20:14:33 -- management/multicore.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:19:35.589 20:14:33 -- management/multicore.sh@12 -- # spdk_pid='?' 00:19:35.589 20:14:33 -- management/multicore.sh@24 -- # start_spdk 00:19:35.589 20:14:33 -- management/multicore.sh@15 -- # spdk_pid=2168528 00:19:35.589 20:14:33 -- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:35.589 20:14:33 -- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:19:35.589 20:14:33 -- management/multicore.sh@17 -- # waitforlisten 2168528 00:19:35.589 20:14:33 -- common/autotest_common.sh@819 -- # '[' -z 2168528 ']' 00:19:35.589 20:14:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.589 20:14:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:35.589 20:14:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
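The first multicore scenario below hangs two OCF bdevs off a cache device that does not exist yet; a condensed sketch of those steps:

# Sketch of the setup run below: C1 and C2 share one cache device and stay
# idle (started == false) until the 'Cache' bdev is actually created.
./scripts/rpc.py bdev_malloc_create 1 512 -b Core0
./scripts/rpc.py bdev_malloc_create 1 512 -b Core1
./scripts/rpc.py bdev_ocf_create C1 wt Cache Core0      # waits for 'Cache'
./scripts/rpc.py bdev_ocf_create C2 wt Cache Core1      # waits for 'Cache'
./scripts/rpc.py bdev_ocf_get_bdevs | jq -e 'any(select(.started)) == false'
./scripts/rpc.py bdev_malloc_create 101 512 -b Cache    # both C1 and C2 start now
./scripts/rpc.py bdev_ocf_get_bdevs | jq -e 'all(select(.started)) == true'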
00:19:35.589 20:14:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:35.589 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:19:35.589 [2024-04-25 20:14:33.510758] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:35.589 [2024-04-25 20:14:33.510834] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168528 ] 00:19:35.847 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.847 [2024-04-25 20:14:33.615783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.847 [2024-04-25 20:14:33.716770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.106 [2024-04-25 20:14:33.913065] 'OCF_Core' volume operations registered 00:19:36.106 [2024-04-25 20:14:33.916540] 'OCF_Cache' volume operations registered 00:19:36.106 [2024-04-25 20:14:33.920486] 'OCF Composite' volume operations registered 00:19:36.106 [2024-04-25 20:14:33.923996] 'SPDK_block_device' volume operations registered 00:19:36.674 20:14:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:36.674 20:14:34 -- common/autotest_common.sh@852 -- # return 0 00:19:36.674 20:14:34 -- management/multicore.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core0 00:19:36.932 Core0 00:19:36.932 20:14:34 -- management/multicore.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core1 00:19:37.191 Core1 00:19:37.191 20:14:34 -- management/multicore.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Core0 00:19:37.191 [2024-04-25 20:14:35.119311] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for cache device 'Cache' to connect 00:19:37.191 C1 00:19:37.450 20:14:35 -- management/multicore.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core1 00:19:37.450 [2024-04-25 20:14:35.347944] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for cache device 'Cache' to connect 00:19:37.450 C2 00:19:37.450 20:14:35 -- management/multicore.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:37.450 20:14:35 -- management/multicore.sh@34 -- # jq -e 'any(select(.started)) == false' 00:19:37.709 true 00:19:37.709 20:14:35 -- management/multicore.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache 00:19:37.968 [2024-04-25 20:14:35.825935] Inserting cache C1 00:19:37.968 [2024-04-25 20:14:35.826305] C1: Metadata initialized 00:19:37.968 [2024-04-25 20:14:35.826771] C1: Successfully added 00:19:37.968 [2024-04-25 20:14:35.826786] C1: Cache mode : wt 00:19:37.968 [2024-04-25 20:14:35.826861] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache 00:19:37.968 Cache 00:19:37.968 [2024-04-25 20:14:35.836775] C1: Super block config offset : 0 kiB 00:19:37.968 [2024-04-25 20:14:35.836798] C1: Super block config size : 2200 B 00:19:37.968 [2024-04-25 20:14:35.836805] C1: Super block runtime offset : 128 kiB 00:19:37.968 [2024-04-25 20:14:35.836811] C1: Super block runtime size : 4 B 00:19:37.968 [2024-04-25 20:14:35.836818] C1: Reserved offset : 256 kiB 00:19:37.968 [2024-04-25 20:14:35.836824] C1: Reserved size : 128 kiB 
00:19:37.968 [2024-04-25 20:14:35.836831] C1: Part config offset : 384 kiB 00:19:37.968 [2024-04-25 20:14:35.836837] C1: Part config size : 48 kiB 00:19:37.968 [2024-04-25 20:14:35.836844] C1: Part runtime offset : 640 kiB 00:19:37.968 [2024-04-25 20:14:35.836850] C1: Part runtime size : 72 kiB 00:19:37.968 [2024-04-25 20:14:35.836857] C1: Core config offset : 768 kiB 00:19:37.968 [2024-04-25 20:14:35.836863] C1: Core config size : 512 kiB 00:19:37.968 [2024-04-25 20:14:35.836870] C1: Core runtime offset : 1792 kiB 00:19:37.968 [2024-04-25 20:14:35.836876] C1: Core runtime size : 1172 kiB 00:19:37.968 [2024-04-25 20:14:35.836883] C1: Core UUID offset : 3072 kiB 00:19:37.968 [2024-04-25 20:14:35.836889] C1: Core UUID size : 16384 kiB 00:19:37.968 [2024-04-25 20:14:35.836896] C1: Cleaning offset : 35840 kiB 00:19:37.968 [2024-04-25 20:14:35.836902] C1: Cleaning size : 196 kiB 00:19:37.968 [2024-04-25 20:14:35.836909] C1: LRU list offset : 36096 kiB 00:19:37.968 [2024-04-25 20:14:35.836915] C1: LRU list size : 148 kiB 00:19:37.968 [2024-04-25 20:14:35.836921] C1: Collision offset : 36352 kiB 00:19:37.968 [2024-04-25 20:14:35.836928] C1: Collision size : 196 kiB 00:19:37.968 [2024-04-25 20:14:35.836934] C1: List info offset : 36608 kiB 00:19:37.968 [2024-04-25 20:14:35.836941] C1: List info size : 148 kiB 00:19:37.968 [2024-04-25 20:14:35.836947] C1: Hash offset : 36864 kiB 00:19:37.968 [2024-04-25 20:14:35.836954] C1: Hash size : 20 kiB 00:19:37.968 [2024-04-25 20:14:35.836961] C1: Cache line size: 4 kiB 00:19:37.968 [2024-04-25 20:14:35.836969] C1: Metadata capacity: 18 MiB 00:19:37.968 [2024-04-25 20:14:35.846468] C1: Policy 'always' initialized successfully 00:19:37.968 20:14:35 -- management/multicore.sh@39 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:37.968 20:14:35 -- management/multicore.sh@39 -- # jq -e 'all(select(.started)) == true' 00:19:38.226 [2024-04-25 20:14:35.959869] C1: Done saving cache state! 
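Once both cores attach below, the test waits for the composite bdev C2 and inspects it like any other bdev; a sketch (the jq projection is an assumption, the full bdev JSON is dumped further down):

# Sketch: a started OCF bdev appears in bdev_get_bdevs like any other bdev,
# with its cache/core binding reported under driver_specific.
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py bdev_get_bdevs -b C2 -t 2000 | jq '.[0].driver_specific'
# -> { "cache_device": "Cache", "core_device": "Core1", "mode": "wt", ... }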
00:19:38.226 [2024-04-25 20:14:35.991119] C1: Cache attached 00:19:38.226 [2024-04-25 20:14:35.991215] C1: Successfully attached 00:19:38.226 [2024-04-25 20:14:35.991500] C1: Inserting core Core1 00:19:38.226 [2024-04-25 20:14:35.991534] C1.Core1: Seqential cutoff init 00:19:38.226 [2024-04-25 20:14:36.022407] C1.Core1: Successfully added 00:19:38.226 [2024-04-25 20:14:36.023172] C1: Inserting core Core0 00:19:38.226 [2024-04-25 20:14:36.023203] C1.Core0: Seqential cutoff init 00:19:38.226 [2024-04-25 20:14:36.054541] C1.Core0: Successfully added 00:19:38.226 true 00:19:38.226 20:14:36 -- management/multicore.sh@43 -- # waitforbdev C2 00:19:38.226 20:14:36 -- common/autotest_common.sh@887 -- # local bdev_name=C2 00:19:38.226 20:14:36 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:19:38.226 20:14:36 -- common/autotest_common.sh@889 -- # local i 00:19:38.226 20:14:36 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:19:38.226 20:14:36 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:19:38.226 20:14:36 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:38.485 20:14:36 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b C2 -t 2000 00:19:38.743 [ 00:19:38.743 { 00:19:38.743 "name": "C2", 00:19:38.743 "aliases": [ 00:19:38.744 "05609690-9fd5-5bc6-9a8a-f3c893df31dc" 00:19:38.744 ], 00:19:38.744 "product_name": "SPDK OCF", 00:19:38.744 "block_size": 512, 00:19:38.744 "num_blocks": 2048, 00:19:38.744 "uuid": "05609690-9fd5-5bc6-9a8a-f3c893df31dc", 00:19:38.744 "assigned_rate_limits": { 00:19:38.744 "rw_ios_per_sec": 0, 00:19:38.744 "rw_mbytes_per_sec": 0, 00:19:38.744 "r_mbytes_per_sec": 0, 00:19:38.744 "w_mbytes_per_sec": 0 00:19:38.744 }, 00:19:38.744 "claimed": false, 00:19:38.744 "zoned": false, 00:19:38.744 "supported_io_types": { 00:19:38.744 "read": true, 00:19:38.744 "write": true, 00:19:38.744 "unmap": true, 00:19:38.744 "write_zeroes": true, 00:19:38.744 "flush": true, 00:19:38.744 "reset": false, 00:19:38.744 "compare": false, 00:19:38.744 "compare_and_write": false, 00:19:38.744 "abort": false, 00:19:38.744 "nvme_admin": false, 00:19:38.744 "nvme_io": false 00:19:38.744 }, 00:19:38.744 "driver_specific": { 00:19:38.744 "cache_device": "Cache", 00:19:38.744 "core_device": "Core1", 00:19:38.744 "mode": "wt", 00:19:38.744 "cache_line_size": 4, 00:19:38.744 "metadata_volatile": false 00:19:38.744 } 00:19:38.744 } 00:19:38.744 ] 00:19:38.744 20:14:36 -- common/autotest_common.sh@895 -- # return 0 00:19:38.744 20:14:36 -- management/multicore.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete C2 00:19:39.003 [2024-04-25 20:14:36.762484] C1: Flushing cache 00:19:39.003 [2024-04-25 20:14:36.762521] C1: Flushing cache completed 00:19:39.003 [2024-04-25 20:14:36.763540] C1.Core1: Removing core 00:19:39.003 [2024-04-25 20:14:36.796904] C1: Core Core1 successfully removed 00:19:39.003 [2024-04-25 20:14:36.796957] vbdev_ocf.c: 299:stop_vbdev: *NOTICE*: Not stopping cache instance 'Cache' because it is referenced by other OCF bdev 00:19:39.003 20:14:36 -- management/multicore.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C1 00:19:39.003 20:14:36 -- management/multicore.sh@49 -- # jq -e '.[0] | .started' 00:19:39.262 true 00:19:39.262 20:14:37 -- management/multicore.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 
wt Cache Core1 00:19:39.520 [2024-04-25 20:14:37.259992] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache 00:19:39.520 [2024-04-25 20:14:37.260242] C1: Inserting core Core1 00:19:39.520 [2024-04-25 20:14:37.260266] C1.Core1: Seqential cutoff init 00:19:39.520 [2024-04-25 20:14:37.293929] C1.Core1: Successfully added 00:19:39.520 C2 00:19:39.520 20:14:37 -- management/multicore.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C2 00:19:39.520 20:14:37 -- management/multicore.sh@54 -- # jq -e '.[0] | .started' 00:19:39.779 true 00:19:39.779 20:14:37 -- management/multicore.sh@59 -- # stop_spdk 00:19:39.779 20:14:37 -- management/multicore.sh@20 -- # killprocess 2168528 00:19:39.779 20:14:37 -- common/autotest_common.sh@926 -- # '[' -z 2168528 ']' 00:19:39.779 20:14:37 -- common/autotest_common.sh@930 -- # kill -0 2168528 00:19:39.779 20:14:37 -- common/autotest_common.sh@931 -- # uname 00:19:39.779 20:14:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:39.779 20:14:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2168528 00:19:39.779 20:14:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:39.779 20:14:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:39.779 20:14:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2168528' 00:19:39.779 killing process with pid 2168528 00:19:39.779 20:14:37 -- common/autotest_common.sh@945 -- # kill 2168528 00:19:39.779 20:14:37 -- common/autotest_common.sh@950 -- # wait 2168528 00:19:40.037 [2024-04-25 20:14:37.738944] C1: Flushing cache 00:19:40.037 [2024-04-25 20:14:37.738991] C1: Flushing cache completed 00:19:40.037 [2024-04-25 20:14:37.739046] C1: Stopping cache 00:19:40.037 [2024-04-25 20:14:37.846312] C1: Done saving cache state! 00:19:40.037 [2024-04-25 20:14:37.863401] Cache C1 successfully stopped 00:19:40.606 20:14:38 -- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT 00:19:40.606 20:14:38 -- management/multicore.sh@62 -- # start_spdk 00:19:40.606 20:14:38 -- management/multicore.sh@15 -- # spdk_pid=2169100 00:19:40.606 20:14:38 -- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:19:40.606 20:14:38 -- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:19:40.606 20:14:38 -- management/multicore.sh@17 -- # waitforlisten 2169100 00:19:40.606 20:14:38 -- common/autotest_common.sh@819 -- # '[' -z 2169100 ']' 00:19:40.606 20:14:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.606 20:14:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:40.606 20:14:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.606 20:14:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:40.606 20:14:38 -- common/autotest_common.sh@10 -- # set +x 00:19:40.606 [2024-04-25 20:14:38.299960] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:40.606 [2024-04-25 20:14:38.300033] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169100 ] 00:19:40.606 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.606 [2024-04-25 20:14:38.407382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.606 [2024-04-25 20:14:38.512037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.866 [2024-04-25 20:14:38.716980] 'OCF_Core' volume operations registered 00:19:40.866 [2024-04-25 20:14:38.720449] 'OCF_Cache' volume operations registered 00:19:40.866 [2024-04-25 20:14:38.724396] 'OCF Composite' volume operations registered 00:19:40.866 [2024-04-25 20:14:38.727898] 'SPDK_block_device' volume operations registered 00:19:41.434 20:14:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:41.434 20:14:39 -- common/autotest_common.sh@852 -- # return 0 00:19:41.434 20:14:39 -- management/multicore.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache 00:19:41.694 Cache 00:19:41.694 20:14:39 -- management/multicore.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc 00:19:41.953 Malloc 00:19:41.953 20:14:39 -- management/multicore.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core 00:19:42.212 Core 00:19:42.212 20:14:39 -- management/multicore.sh@68 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Malloc 00:19:42.471 [2024-04-25 20:14:40.170902] Inserting cache C1 00:19:42.471 [2024-04-25 20:14:40.171351] C1: Metadata initialized 00:19:42.471 [2024-04-25 20:14:40.171794] C1: Successfully added 00:19:42.471 [2024-04-25 20:14:40.171810] C1: Cache mode : wt 00:19:42.471 [2024-04-25 20:14:40.182070] C1: Super block config offset : 0 kiB 00:19:42.471 [2024-04-25 20:14:40.182092] C1: Super block config size : 2200 B 00:19:42.471 [2024-04-25 20:14:40.182099] C1: Super block runtime offset : 128 kiB 00:19:42.471 [2024-04-25 20:14:40.182105] C1: Super block runtime size : 4 B 00:19:42.471 [2024-04-25 20:14:40.182112] C1: Reserved offset : 256 kiB 00:19:42.471 [2024-04-25 20:14:40.182118] C1: Reserved size : 128 kiB 00:19:42.471 [2024-04-25 20:14:40.182125] C1: Part config offset : 384 kiB 00:19:42.471 [2024-04-25 20:14:40.182132] C1: Part config size : 48 kiB 00:19:42.471 [2024-04-25 20:14:40.182138] C1: Part runtime offset : 640 kiB 00:19:42.471 [2024-04-25 20:14:40.182145] C1: Part runtime size : 72 kiB 00:19:42.471 [2024-04-25 20:14:40.182151] C1: Core config offset : 768 kiB 00:19:42.471 [2024-04-25 20:14:40.182157] C1: Core config size : 512 kiB 00:19:42.471 [2024-04-25 20:14:40.182164] C1: Core runtime offset : 1792 kiB 00:19:42.471 [2024-04-25 20:14:40.182170] C1: Core runtime size : 1172 kiB 00:19:42.471 [2024-04-25 20:14:40.182177] C1: Core UUID offset : 3072 kiB 00:19:42.471 [2024-04-25 20:14:40.182183] C1: Core UUID size : 16384 kiB 00:19:42.471 [2024-04-25 20:14:40.182190] C1: Cleaning offset : 35840 kiB 00:19:42.471 [2024-04-25 20:14:40.182196] C1: Cleaning size : 196 kiB 00:19:42.471 [2024-04-25 20:14:40.182203] C1: LRU list offset : 36096 kiB 00:19:42.471 [2024-04-25 20:14:40.182209] C1: LRU list size : 148 kiB 00:19:42.471 [2024-04-25 20:14:40.182215] C1: Collision offset : 36352 kiB 00:19:42.471 [2024-04-25 
20:14:40.182222] C1: Collision size : 196 kiB 00:19:42.471 [2024-04-25 20:14:40.182228] C1: List info offset : 36608 kiB 00:19:42.471 [2024-04-25 20:14:40.182234] C1: List info size : 148 kiB 00:19:42.471 [2024-04-25 20:14:40.182241] C1: Hash offset : 36864 kiB 00:19:42.471 [2024-04-25 20:14:40.182247] C1: Hash size : 20 kiB 00:19:42.471 [2024-04-25 20:14:40.182254] C1: Cache line size: 4 kiB 00:19:42.471 [2024-04-25 20:14:40.182263] C1: Metadata capacity: 18 MiB 00:19:42.471 [2024-04-25 20:14:40.192170] C1: Policy 'always' initialized successfully 00:19:42.471 [2024-04-25 20:14:40.306777] C1: Done saving cache state! 00:19:42.471 [2024-04-25 20:14:40.338813] C1: Cache attached 00:19:42.471 [2024-04-25 20:14:40.338910] C1: Successfully attached 00:19:42.471 [2024-04-25 20:14:40.339199] C1: Inserting core Malloc 00:19:42.471 [2024-04-25 20:14:40.339235] C1.Malloc: Seqential cutoff init 00:19:42.471 [2024-04-25 20:14:40.370863] C1.Malloc: Successfully added 00:19:42.471 C1 00:19:42.471 20:14:40 -- management/multicore.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core 00:19:42.731 [2024-04-25 20:14:40.601301] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache 00:19:42.731 [2024-04-25 20:14:40.601557] C1: Inserting core Core 00:19:42.731 [2024-04-25 20:14:40.601583] C1.Core: Seqential cutoff init 00:19:42.731 [2024-04-25 20:14:40.635068] C1.Core: Successfully added 00:19:42.731 C2 00:19:42.731 20:14:40 -- management/multicore.sh@71 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs Cache 00:19:42.731 20:14:40 -- management/multicore.sh@72 -- # jq 'length == 2' 00:19:42.990 true 00:19:42.990 20:14:40 -- management/multicore.sh@74 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Cache 00:19:43.248 [2024-04-25 20:14:41.097870] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Cache' was removed 00:19:43.248 [2024-04-25 20:14:41.097915] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Cache' was removed 00:19:43.248 [2024-04-25 20:14:41.098145] C1: Flushing cache 00:19:43.248 [2024-04-25 20:14:41.098164] C1: Flushing cache completed 00:19:43.248 [2024-04-25 20:14:41.098450] C1: Flushing cache 00:19:43.248 [2024-04-25 20:14:41.098461] C1: Flushing cache completed 00:19:43.248 [2024-04-25 20:14:41.098554] C1: Stopping cache 00:19:43.507 [2024-04-25 20:14:41.206552] C1: Done saving cache state! 00:19:43.507 [2024-04-25 20:14:41.223195] Cache C1 successfully stopped 00:19:43.507 20:14:41 -- management/multicore.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:43.507 20:14:41 -- management/multicore.sh@76 -- # jq -e '. 
== []' 00:19:43.764 true 00:19:43.764 20:14:41 -- management/multicore.sh@81 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Malloc NonExisting 00:19:44.023 [2024-04-25 20:14:41.728680] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for core device 'NonExisting' to connect 00:19:44.023 C1 00:19:44.023 20:14:41 -- management/multicore.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Malloc NonExisting 00:19:44.281 [2024-04-25 20:14:41.965337] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for core device 'NonExisting' to connect 00:19:44.281 C2 00:19:44.281 20:14:41 -- management/multicore.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C3 wt Malloc Core 00:19:44.281 [2024-04-25 20:14:42.202022] Inserting cache C3 00:19:44.281 [2024-04-25 20:14:42.202470] C3: Metadata initialized 00:19:44.281 [2024-04-25 20:14:42.202896] C3: Successfully added 00:19:44.281 [2024-04-25 20:14:42.202905] C3: Cache mode : wt 00:19:44.281 [2024-04-25 20:14:42.213483] C3: Super block config offset : 0 kiB 00:19:44.281 [2024-04-25 20:14:42.213507] C3: Super block config size : 2200 B 00:19:44.281 [2024-04-25 20:14:42.213514] C3: Super block runtime offset : 128 kiB 00:19:44.281 [2024-04-25 20:14:42.213521] C3: Super block runtime size : 4 B 00:19:44.281 [2024-04-25 20:14:42.213528] C3: Reserved offset : 256 kiB 00:19:44.281 [2024-04-25 20:14:42.213534] C3: Reserved size : 128 kiB 00:19:44.281 [2024-04-25 20:14:42.213541] C3: Part config offset : 384 kiB 00:19:44.281 [2024-04-25 20:14:42.213547] C3: Part config size : 48 kiB 00:19:44.281 [2024-04-25 20:14:42.213554] C3: Part runtime offset : 640 kiB 00:19:44.281 [2024-04-25 20:14:42.213560] C3: Part runtime size : 72 kiB 00:19:44.281 [2024-04-25 20:14:42.213567] C3: Core config offset : 768 kiB 00:19:44.281 [2024-04-25 20:14:42.213573] C3: Core config size : 512 kiB 00:19:44.281 [2024-04-25 20:14:42.213580] C3: Core runtime offset : 1792 kiB 00:19:44.281 [2024-04-25 20:14:42.213586] C3: Core runtime size : 1172 kiB 00:19:44.281 [2024-04-25 20:14:42.213592] C3: Core UUID offset : 3072 kiB 00:19:44.281 [2024-04-25 20:14:42.213599] C3: Core UUID size : 16384 kiB 00:19:44.281 [2024-04-25 20:14:42.213605] C3: Cleaning offset : 35840 kiB 00:19:44.281 [2024-04-25 20:14:42.213612] C3: Cleaning size : 196 kiB 00:19:44.281 [2024-04-25 20:14:42.213618] C3: LRU list offset : 36096 kiB 00:19:44.281 [2024-04-25 20:14:42.213625] C3: LRU list size : 148 kiB 00:19:44.281 [2024-04-25 20:14:42.213646] C3: Collision offset : 36352 kiB 00:19:44.281 [2024-04-25 20:14:42.213653] C3: Collision size : 196 kiB 00:19:44.281 [2024-04-25 20:14:42.213659] C3: List info offset : 36608 kiB 00:19:44.281 [2024-04-25 20:14:42.213666] C3: List info size : 148 kiB 00:19:44.281 [2024-04-25 20:14:42.213672] C3: Hash offset : 36864 kiB 00:19:44.281 [2024-04-25 20:14:42.213679] C3: Hash size : 20 kiB 00:19:44.281 [2024-04-25 20:14:42.213685] C3: Cache line size: 4 kiB 00:19:44.281 [2024-04-25 20:14:42.213694] C3: Metadata capacity: 18 MiB 00:19:44.540 [2024-04-25 20:14:42.223791] C3: Policy 'always' initialized successfully 00:19:44.540 [2024-04-25 20:14:42.338297] C3: Done saving cache state! 
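The partially constructed bdevs above show that an OCF bdev can be registered before its devices exist; a sketch of how that waiting state can be queried (the jq filter mirrors the earlier PartCache check, applied here to C1 as an assumption):

# Sketch: an OCF bdev created against a missing core simply waits; it reports
# started == false with only the cache side attached until the core appears.
./scripts/rpc.py bdev_ocf_create C1 wt Malloc NonExisting
./scripts/rpc.py bdev_ocf_get_bdevs C1 \
    | jq -e '.[0] | .started == false and .cache.attached and .core.attached == false'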
00:19:44.540 [2024-04-25 20:14:42.370172] C3: Cache attached 00:19:44.540 [2024-04-25 20:14:42.370269] C3: Successfully attached 00:19:44.540 [2024-04-25 20:14:42.370560] C3: Inserting core Core 00:19:44.540 [2024-04-25 20:14:42.370594] C3.Core: Seqential cutoff init 00:19:44.540 [2024-04-25 20:14:42.402169] C3.Core: Successfully added 00:19:44.540 C3 00:19:44.540 20:14:42 -- management/multicore.sh@85 -- # stop_spdk 00:19:44.540 20:14:42 -- management/multicore.sh@20 -- # killprocess 2169100 00:19:44.540 20:14:42 -- common/autotest_common.sh@926 -- # '[' -z 2169100 ']' 00:19:44.540 20:14:42 -- common/autotest_common.sh@930 -- # kill -0 2169100 00:19:44.540 20:14:42 -- common/autotest_common.sh@931 -- # uname 00:19:44.540 20:14:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:44.540 20:14:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2169100 00:19:44.540 20:14:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:44.540 20:14:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:44.540 20:14:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2169100' 00:19:44.540 killing process with pid 2169100 00:19:44.540 20:14:42 -- common/autotest_common.sh@945 -- # kill 2169100 00:19:44.540 20:14:42 -- common/autotest_common.sh@950 -- # wait 2169100 00:19:44.799 [2024-04-25 20:14:42.632008] C3: Flushing cache 00:19:44.799 [2024-04-25 20:14:42.632059] C3: Flushing cache completed 00:19:44.799 [2024-04-25 20:14:42.632109] C3: Stopping cache 00:19:45.057 [2024-04-25 20:14:42.740852] C3: Done saving cache state! 00:19:45.057 [2024-04-25 20:14:42.759389] Cache C3 successfully stopped 00:19:45.057 [2024-04-25 20:14:42.761285] bdev.c:2354:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc'! 
00:19:45.057 [2024-04-25 20:14:42.761341] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Malloc' was removed 00:19:45.057 [2024-04-25 20:14:42.761359] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Malloc' was removed 00:19:45.317 20:14:43 -- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT 00:19:45.317 00:19:45.317 real 0m9.813s 00:19:45.317 user 0m14.256s 00:19:45.317 sys 0m2.134s 00:19:45.317 20:14:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:45.317 20:14:43 -- common/autotest_common.sh@10 -- # set +x 00:19:45.317 ************************************ 00:19:45.317 END TEST ocf_multicore 00:19:45.317 ************************************ 00:19:45.317 20:14:43 -- ocf/ocf.sh@17 -- # run_test ocf_remove /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh 00:19:45.317 20:14:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:45.317 20:14:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:45.317 20:14:43 -- common/autotest_common.sh@10 -- # set +x 00:19:45.317 ************************************ 00:19:45.317 START TEST ocf_remove 00:19:45.317 ************************************ 00:19:45.317 20:14:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh 00:19:45.575 20:14:43 -- management/remove.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:19:45.575 20:14:43 -- management/remove.sh@12 -- # rm -f 00:19:45.575 20:14:43 -- management/remove.sh@13 -- # truncate -s 128M aio0 00:19:45.575 20:14:43 -- management/remove.sh@14 -- # truncate -s 128M aio1 00:19:45.575 20:14:43 -- management/remove.sh@16 -- # jq . 00:19:45.575 20:14:43 -- management/remove.sh@48 -- # spdk_pid=2169865 00:19:45.575 20:14:43 -- management/remove.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config 00:19:45.575 20:14:43 -- management/remove.sh@50 -- # waitforlisten 2169865 00:19:45.575 20:14:43 -- common/autotest_common.sh@819 -- # '[' -z 2169865 ']' 00:19:45.575 20:14:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.575 20:14:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:45.575 20:14:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.575 20:14:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:45.575 20:14:43 -- common/autotest_common.sh@10 -- # set +x 00:19:45.575 [2024-04-25 20:14:43.416804] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:19:45.575 [2024-04-25 20:14:43.416884] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2169865 ] 00:19:45.575 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.834 [2024-04-25 20:14:43.522546] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.834 [2024-04-25 20:14:43.623102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.093 [2024-04-25 20:14:43.802504] 'OCF_Core' volume operations registered 00:19:46.093 [2024-04-25 20:14:43.805712] 'OCF_Cache' volume operations registered 00:19:46.093 [2024-04-25 20:14:43.809334] 'OCF Composite' volume operations registered 00:19:46.093 [2024-04-25 20:14:43.812574] 'SPDK_block_device' volume operations registered 00:19:46.661 20:14:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:46.661 20:14:44 -- common/autotest_common.sh@852 -- # return 0 00:19:46.661 20:14:44 -- management/remove.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create ocfWT wt aio0 aio1 00:19:46.661 [2024-04-25 20:14:44.469046] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'ocfWT' is waiting for cache device 'aio0' to connect 00:19:46.661 ocfWT 00:19:46.661 20:14:44 -- management/remove.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:46.661 20:14:44 -- management/remove.sh@58 -- # jq -r '.[] .name' 00:19:46.661 20:14:44 -- management/remove.sh@58 -- # grep -qw ocfWT 00:19:46.920 20:14:44 -- management/remove.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete ocfWT 00:19:47.178 20:14:44 -- management/remove.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:47.179 20:14:44 -- management/remove.sh@66 -- # jq -r '.[] | select(.name == "ocfWT") | .name' 00:19:47.437 20:14:45 -- management/remove.sh@66 -- # [[ -z '' ]] 00:19:47.437 20:14:45 -- management/remove.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:47.437 20:14:45 -- management/remove.sh@70 -- # killprocess 2169865 00:19:47.437 20:14:45 -- common/autotest_common.sh@926 -- # '[' -z 2169865 ']' 00:19:47.437 20:14:45 -- common/autotest_common.sh@930 -- # kill -0 2169865 00:19:47.437 20:14:45 -- common/autotest_common.sh@931 -- # uname 00:19:47.437 20:14:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:47.437 20:14:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2169865 00:19:47.437 20:14:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:47.437 20:14:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:47.437 20:14:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2169865' 00:19:47.437 killing process with pid 2169865 00:19:47.437 20:14:45 -- common/autotest_common.sh@945 -- # kill 2169865 00:19:47.437 20:14:45 -- common/autotest_common.sh@950 -- # wait 2169865 00:19:48.005 20:14:45 -- management/remove.sh@74 -- # spdk_pid=2170230 00:19:48.005 20:14:45 -- management/remove.sh@73 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config 00:19:48.005 20:14:45 -- management/remove.sh@76 -- # trap 'killprocess $spdk_pid; rm -f aio* $curdir/config ocf_bdevs ocf_bdevs_verify; exit 1' SIGINT SIGTERM EXIT 00:19:48.005 20:14:45 -- 
management/remove.sh@78 -- # waitforlisten 2170230 00:19:48.005 20:14:45 -- common/autotest_common.sh@819 -- # '[' -z 2170230 ']' 00:19:48.005 20:14:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.005 20:14:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:48.005 20:14:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.005 20:14:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:48.005 20:14:45 -- common/autotest_common.sh@10 -- # set +x 00:19:48.005 [2024-04-25 20:14:45.789959] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:19:48.005 [2024-04-25 20:14:45.790034] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170230 ] 00:19:48.005 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.006 [2024-04-25 20:14:45.895524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.265 [2024-04-25 20:14:46.000884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.523 [2024-04-25 20:14:46.200783] 'OCF_Core' volume operations registered 00:19:48.523 [2024-04-25 20:14:46.204266] 'OCF_Cache' volume operations registered 00:19:48.523 [2024-04-25 20:14:46.208199] 'OCF Composite' volume operations registered 00:19:48.523 [2024-04-25 20:14:46.211724] 'SPDK_block_device' volume operations registered 00:19:48.781 20:14:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:48.781 20:14:46 -- common/autotest_common.sh@852 -- # return 0 00:19:48.781 20:14:46 -- management/remove.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:48.781 20:14:46 -- management/remove.sh@82 -- # jq -r '.[] | select(name == "ocfWT") | .name' 00:19:49.038 jq: error: name/0 is not defined at , line 1: 00:19:49.038 .[] | select(name == "ocfWT") | .name 00:19:49.039 jq: 1 compile error 00:19:49.039 Exception ignored in: <_io.TextIOWrapper name='' mode='w' encoding='utf-8'> 00:19:49.039 BrokenPipeError: [Errno 32] Broken pipe 00:19:49.039 20:14:46 -- management/remove.sh@82 -- # trap - ERR 00:19:49.039 20:14:46 -- management/remove.sh@82 -- # print_backtrace 00:19:49.039 20:14:46 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:19:49.039 20:14:46 -- common/autotest_common.sh@1132 -- # return 0 00:19:49.039 20:14:46 -- management/remove.sh@82 -- # [[ -z '' ]] 00:19:49.039 20:14:46 -- management/remove.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:19:49.039 20:14:46 -- management/remove.sh@86 -- # killprocess 2170230 00:19:49.039 20:14:46 -- common/autotest_common.sh@926 -- # '[' -z 2170230 ']' 00:19:49.039 20:14:46 -- common/autotest_common.sh@930 -- # kill -0 2170230 00:19:49.039 20:14:46 -- common/autotest_common.sh@931 -- # uname 00:19:49.039 20:14:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:49.039 20:14:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2170230 00:19:49.297 20:14:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:49.297 20:14:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:49.297 20:14:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2170230' 00:19:49.297 
killing process with pid 2170230 00:19:49.297 20:14:46 -- common/autotest_common.sh@945 -- # kill 2170230 00:19:49.297 20:14:46 -- common/autotest_common.sh@950 -- # wait 2170230 00:19:49.864 20:14:47 -- management/remove.sh@87 -- # rm -f aio0 aio1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config ocf_bdevs ocf_bdevs_verify 00:19:49.864 00:19:49.864 real 0m4.263s 00:19:49.864 user 0m5.034s 00:19:49.864 sys 0m1.254s 00:19:49.864 20:14:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:49.865 20:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:49.865 ************************************ 00:19:49.865 END TEST ocf_remove 00:19:49.865 ************************************ 00:19:49.865 20:14:47 -- ocf/ocf.sh@18 -- # run_test ocf_configuration_change /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh 00:19:49.865 20:14:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:49.865 20:14:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:49.865 20:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:49.865 ************************************ 00:19:49.865 START TEST ocf_configuration_change 00:19:49.865 ************************************ 00:19:49.865 20:14:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh 00:19:49.865 20:14:47 -- management/configuration-change.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py 00:19:49.865 20:14:47 -- management/configuration-change.sh@11 -- # cache_line_sizes=(4 8 16 32 64) 00:19:49.865 20:14:47 -- management/configuration-change.sh@12 -- # cache_modes=(wt wb pt wa wi wo) 00:19:49.865 20:14:47 -- management/configuration-change.sh@15 -- # spdk_pid=2170461 00:19:49.865 20:14:47 -- management/configuration-change.sh@17 -- # waitforlisten 2170461 00:19:49.865 20:14:47 -- management/configuration-change.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt 00:19:49.865 20:14:47 -- common/autotest_common.sh@819 -- # '[' -z 2170461 ']' 00:19:49.865 20:14:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.865 20:14:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.865 20:14:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.865 20:14:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.865 20:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:49.865 [2024-04-25 20:14:47.675055] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
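One note before the configuration-change output: the failure captured at remove.sh@82 above is a jq quoting bug, not an OCF problem. The filter uses select(name == "ocfWT"); without the leading dot, jq treats name as an undefined function ('name/0 is not defined'), aborts compilation, and the rpc.py writer sees a broken pipe. The test clears its ERR trap and still passes. The working form is the one remove.sh@66 already uses:

# as traced at remove.sh@82 (fails to compile):
#   jq -r '.[] | select(name == "ocfWT") | .name'
# corrected, matching remove.sh@66:
./scripts/rpc.py bdev_ocf_get_bdevs | jq -r '.[] | select(.name == "ocfWT") | .name'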
00:19:49.865 [2024-04-25 20:14:47.675131] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170461 ] 00:19:49.865 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.865 [2024-04-25 20:14:47.781983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.123 [2024-04-25 20:14:47.879004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.382 [2024-04-25 20:14:48.075338] 'OCF_Core' volume operations registered 00:19:50.382 [2024-04-25 20:14:48.078849] 'OCF_Cache' volume operations registered 00:19:50.382 [2024-04-25 20:14:48.082803] 'OCF Composite' volume operations registered 00:19:50.382 [2024-04-25 20:14:48.086300] 'SPDK_block_device' volume operations registered 00:19:50.641 20:14:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.641 20:14:48 -- common/autotest_common.sh@852 -- # return 0 00:19:50.641 20:14:48 -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:19:50.641 20:14:48 -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:19:50.900 Malloc0 00:19:50.901 20:14:48 -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:19:51.159 Malloc1 00:19:51.159 20:14:48 -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 4 00:19:51.419 [2024-04-25 20:14:49.210665] Inserting cache Cache0 00:19:51.419 [2024-04-25 20:14:49.211092] Cache0: Metadata initialized 00:19:51.419 [2024-04-25 20:14:49.211527] Cache0: Successfully added 00:19:51.419 [2024-04-25 20:14:49.211542] Cache0: Cache mode : wt 00:19:51.419 [2024-04-25 20:14:49.221512] Cache0: Super block config offset : 0 kiB 00:19:51.419 [2024-04-25 20:14:49.221537] Cache0: Super block config size : 2200 B 00:19:51.419 [2024-04-25 20:14:49.221544] Cache0: Super block runtime offset : 128 kiB 00:19:51.419 [2024-04-25 20:14:49.221551] Cache0: Super block runtime size : 4 B 00:19:51.419 [2024-04-25 20:14:49.221558] Cache0: Reserved offset : 256 kiB 00:19:51.419 [2024-04-25 20:14:49.221564] Cache0: Reserved size : 128 kiB 00:19:51.419 [2024-04-25 20:14:49.221571] Cache0: Part config offset : 384 kiB 00:19:51.419 [2024-04-25 20:14:49.221578] Cache0: Part config size : 48 kiB 00:19:51.419 [2024-04-25 20:14:49.221584] Cache0: Part runtime offset : 640 kiB 00:19:51.419 [2024-04-25 20:14:49.221591] Cache0: Part runtime size : 72 kiB 00:19:51.419 [2024-04-25 20:14:49.221597] Cache0: Core config offset : 768 kiB 00:19:51.419 [2024-04-25 20:14:49.221604] Cache0: Core config size : 512 kiB 00:19:51.419 [2024-04-25 20:14:49.221610] Cache0: Core runtime offset : 1792 kiB 00:19:51.419 [2024-04-25 20:14:49.221617] Cache0: Core runtime size : 1172 kiB 00:19:51.419 [2024-04-25 20:14:49.221623] Cache0: Core UUID offset : 3072 kiB 00:19:51.419 [2024-04-25 20:14:49.221630] Cache0: Core UUID size : 16384 kiB 00:19:51.419 [2024-04-25 20:14:49.221644] Cache0: Cleaning offset : 35840 kiB 00:19:51.419 [2024-04-25 20:14:49.221650] Cache0: Cleaning size : 196 kiB 00:19:51.419 [2024-04-25 20:14:49.221657] Cache0: LRU list offset : 36096 kiB 00:19:51.419 [2024-04-25 20:14:49.221663] Cache0: LRU list size : 148 
kiB 00:19:51.419 [2024-04-25 20:14:49.221670] Cache0: Collision offset : 36352 kiB 00:19:51.419 [2024-04-25 20:14:49.221676] Cache0: Collision size : 196 kiB 00:19:51.419 [2024-04-25 20:14:49.221682] Cache0: List info offset : 36608 kiB 00:19:51.419 [2024-04-25 20:14:49.221689] Cache0: List info size : 148 kiB 00:19:51.419 [2024-04-25 20:14:49.221695] Cache0: Hash offset : 36864 kiB 00:19:51.419 [2024-04-25 20:14:49.221702] Cache0: Hash size : 20 kiB 00:19:51.419 [2024-04-25 20:14:49.221709] Cache0: Cache line size: 4 kiB 00:19:51.419 [2024-04-25 20:14:49.221717] Cache0: Metadata capacity: 18 MiB 00:19:51.419 [2024-04-25 20:14:49.231455] Cache0: Policy 'always' initialized successfully 00:19:51.419 [2024-04-25 20:14:49.344410] Cache0: Done saving cache state! 00:19:51.678 [2024-04-25 20:14:49.375578] Cache0: Cache attached 00:19:51.678 [2024-04-25 20:14:49.375675] Cache0: Successfully attached 00:19:51.678 [2024-04-25 20:14:49.375953] Cache0: Inserting core Malloc1 00:19:51.678 [2024-04-25 20:14:49.375986] Cache0.Malloc1: Seqential cutoff init 00:19:51.678 [2024-04-25 20:14:49.406799] Cache0.Malloc1: Successfully added 00:19:51.678 Cache0 00:19:51.678 20:14:49 -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:51.678 20:14:49 -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:19:51.937 true 00:19:51.937 20:14:49 -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:19:51.937 20:14:49 -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 4' 00:19:52.196 true 00:19:52.196 20:14:49 -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:52.196 20:14:49 -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 4' 00:19:52.455 true 00:19:52.455 20:14:50 -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:19:52.455 [2024-04-25 20:14:50.355711] Cache0: Flushing cache 00:19:52.455 [2024-04-25 20:14:50.355749] Cache0: Flushing cache completed 00:19:52.455 [2024-04-25 20:14:50.356774] Cache0.Malloc1: Removing core 00:19:52.455 [2024-04-25 20:14:50.390058] Cache0: Core Malloc1 successfully removed 00:19:52.455 [2024-04-25 20:14:50.390112] Cache0: Stopping cache 00:19:52.716 [2024-04-25 20:14:50.496728] Cache0: Done saving cache state! 
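The block above is the first pass of the cache_line_sizes loop (4 kiB lines). Every iteration has the same create / verify / delete shape; a condensed sketch of one pass, restricted to the RPCs and jq filters that actually appear in the trace (CLS is just a stand-in for the size under test), is:

CLS=4   # configuration-change.sh repeats this for 4, 8, 16, 32 and 64
./scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
./scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
./scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size "$CLS"
# -e makes jq's exit status reflect the boolean result, which is what the test relies on
./scripts/rpc.py bdev_ocf_get_bdevs | jq -e '.[0] | .started and .cache.attached and .core.attached'
./scripts/rpc.py bdev_get_bdevs -b Cache0 | jq -e ".[0] | .driver_specific.cache_line_size == $CLS"
./scripts/rpc.py save_subsystem_config -n bdev | jq -e ".config | .[] | select(.method == \"bdev_ocf_create\") | .params.cache_line_size == $CLS"
# tear down before the next line size
./scripts/rpc.py bdev_ocf_delete Cache0
./scripts/rpc.py bdev_malloc_delete Malloc0
./scripts/rpc.py bdev_malloc_delete Malloc1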
00:19:52.716 [2024-04-25 20:14:50.514890] Cache Cache0 successfully stopped 00:19:52.716 20:14:50 -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:52.995 20:14:50 -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:53.264 20:14:51 -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:19:53.264 20:14:51 -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:19:53.523 Malloc0 00:19:53.523 20:14:51 -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:19:53.783 Malloc1 00:19:53.783 20:14:51 -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 8 00:19:54.043 [2024-04-25 20:14:51.732342] Inserting cache Cache0 00:19:54.043 [2024-04-25 20:14:51.732815] Cache0: Metadata initialized 00:19:54.043 [2024-04-25 20:14:51.733238] Cache0: Successfully added 00:19:54.043 [2024-04-25 20:14:51.733247] Cache0: Cache mode : wt 00:19:54.043 [2024-04-25 20:14:51.743952] Cache0: Super block config offset : 0 kiB 00:19:54.043 [2024-04-25 20:14:51.743991] Cache0: Super block config size : 2200 B 00:19:54.043 [2024-04-25 20:14:51.743998] Cache0: Super block runtime offset : 128 kiB 00:19:54.043 [2024-04-25 20:14:51.744005] Cache0: Super block runtime size : 4 B 00:19:54.043 [2024-04-25 20:14:51.744012] Cache0: Reserved offset : 256 kiB 00:19:54.043 [2024-04-25 20:14:51.744018] Cache0: Reserved size : 128 kiB 00:19:54.043 [2024-04-25 20:14:51.744025] Cache0: Part config offset : 384 kiB 00:19:54.043 [2024-04-25 20:14:51.744032] Cache0: Part config size : 48 kiB 00:19:54.043 [2024-04-25 20:14:51.744038] Cache0: Part runtime offset : 640 kiB 00:19:54.043 [2024-04-25 20:14:51.744045] Cache0: Part runtime size : 72 kiB 00:19:54.043 [2024-04-25 20:14:51.744051] Cache0: Core config offset : 768 kiB 00:19:54.043 [2024-04-25 20:14:51.744058] Cache0: Core config size : 512 kiB 00:19:54.043 [2024-04-25 20:14:51.744064] Cache0: Core runtime offset : 1792 kiB 00:19:54.043 [2024-04-25 20:14:51.744070] Cache0: Core runtime size : 1172 kiB 00:19:54.043 [2024-04-25 20:14:51.744077] Cache0: Core UUID offset : 3072 kiB 00:19:54.043 [2024-04-25 20:14:51.744083] Cache0: Core UUID size : 16384 kiB 00:19:54.043 [2024-04-25 20:14:51.744090] Cache0: Cleaning offset : 35840 kiB 00:19:54.043 [2024-04-25 20:14:51.744096] Cache0: Cleaning size : 100 kiB 00:19:54.043 [2024-04-25 20:14:51.744103] Cache0: LRU list offset : 35968 kiB 00:19:54.043 [2024-04-25 20:14:51.744109] Cache0: LRU list size : 76 kiB 00:19:54.043 [2024-04-25 20:14:51.744116] Cache0: Collision offset : 36096 kiB 00:19:54.043 [2024-04-25 20:14:51.744122] Cache0: Collision size : 116 kiB 00:19:54.043 [2024-04-25 20:14:51.744129] Cache0: List info offset : 36224 kiB 00:19:54.043 [2024-04-25 20:14:51.744135] Cache0: List info size : 76 kiB 00:19:54.043 [2024-04-25 20:14:51.744141] Cache0: Hash offset : 36352 kiB 00:19:54.043 [2024-04-25 20:14:51.744155] Cache0: Hash size : 12 kiB 00:19:54.043 [2024-04-25 20:14:51.744162] Cache0: Cache line size: 8 kiB 00:19:54.043 [2024-04-25 20:14:51.744171] Cache0: Metadata capacity: 18 MiB 00:19:54.043 [2024-04-25 20:14:51.754447] 
Cache0: Policy 'always' initialized successfully 00:19:54.043 [2024-04-25 20:14:51.852852] Cache0: Done saving cache state! 00:19:54.043 [2024-04-25 20:14:51.884582] Cache0: Cache attached 00:19:54.043 [2024-04-25 20:14:51.884677] Cache0: Successfully attached 00:19:54.043 [2024-04-25 20:14:51.884947] Cache0: Inserting core Malloc1 00:19:54.043 [2024-04-25 20:14:51.884971] Cache0.Malloc1: Seqential cutoff init 00:19:54.043 [2024-04-25 20:14:51.916095] Cache0.Malloc1: Successfully added 00:19:54.043 Cache0 00:19:54.043 20:14:51 -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:54.043 20:14:51 -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:19:54.303 true 00:19:54.303 20:14:52 -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:19:54.303 20:14:52 -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 8' 00:19:54.562 true 00:19:54.562 20:14:52 -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:54.562 20:14:52 -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 8' 00:19:54.822 true 00:19:54.822 20:14:52 -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:19:55.081 [2024-04-25 20:14:52.828905] Cache0: Flushing cache 00:19:55.082 [2024-04-25 20:14:52.828939] Cache0: Flushing cache completed 00:19:55.082 [2024-04-25 20:14:52.829610] Cache0.Malloc1: Removing core 00:19:55.082 [2024-04-25 20:14:52.861736] Cache0: Core Malloc1 successfully removed 00:19:55.082 [2024-04-25 20:14:52.861790] Cache0: Stopping cache 00:19:55.082 [2024-04-25 20:14:52.955361] Cache0: Done saving cache state! 
00:19:55.082 [2024-04-25 20:14:52.971471] Cache Cache0 successfully stopped 00:19:55.082 20:14:52 -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:55.341 20:14:53 -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:55.599 20:14:53 -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:19:55.599 20:14:53 -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:19:55.858 Malloc0 00:19:55.858 20:14:53 -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:19:56.117 Malloc1 00:19:56.117 20:14:53 -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 16 00:19:56.376 [2024-04-25 20:14:54.140218] Inserting cache Cache0 00:19:56.376 [2024-04-25 20:14:54.140700] Cache0: Metadata initialized 00:19:56.376 [2024-04-25 20:14:54.141122] Cache0: Successfully added 00:19:56.376 [2024-04-25 20:14:54.141131] Cache0: Cache mode : wt 00:19:56.376 [2024-04-25 20:14:54.151911] Cache0: Super block config offset : 0 kiB 00:19:56.376 [2024-04-25 20:14:54.151938] Cache0: Super block config size : 2200 B 00:19:56.376 [2024-04-25 20:14:54.151945] Cache0: Super block runtime offset : 128 kiB 00:19:56.376 [2024-04-25 20:14:54.151951] Cache0: Super block runtime size : 4 B 00:19:56.376 [2024-04-25 20:14:54.151958] Cache0: Reserved offset : 256 kiB 00:19:56.376 [2024-04-25 20:14:54.151964] Cache0: Reserved size : 128 kiB 00:19:56.376 [2024-04-25 20:14:54.151971] Cache0: Part config offset : 384 kiB 00:19:56.376 [2024-04-25 20:14:54.151977] Cache0: Part config size : 48 kiB 00:19:56.376 [2024-04-25 20:14:54.151984] Cache0: Part runtime offset : 640 kiB 00:19:56.376 [2024-04-25 20:14:54.151990] Cache0: Part runtime size : 72 kiB 00:19:56.376 [2024-04-25 20:14:54.151996] Cache0: Core config offset : 768 kiB 00:19:56.376 [2024-04-25 20:14:54.152003] Cache0: Core config size : 512 kiB 00:19:56.376 [2024-04-25 20:14:54.152009] Cache0: Core runtime offset : 1792 kiB 00:19:56.376 [2024-04-25 20:14:54.152016] Cache0: Core runtime size : 1172 kiB 00:19:56.376 [2024-04-25 20:14:54.152022] Cache0: Core UUID offset : 3072 kiB 00:19:56.376 [2024-04-25 20:14:54.152036] Cache0: Core UUID size : 16384 kiB 00:19:56.376 [2024-04-25 20:14:54.152042] Cache0: Cleaning offset : 35840 kiB 00:19:56.376 [2024-04-25 20:14:54.152049] Cache0: Cleaning size : 52 kiB 00:19:56.376 [2024-04-25 20:14:54.152056] Cache0: LRU list offset : 35968 kiB 00:19:56.376 [2024-04-25 20:14:54.152062] Cache0: LRU list size : 40 kiB 00:19:56.376 [2024-04-25 20:14:54.152068] Cache0: Collision offset : 36096 kiB 00:19:56.376 [2024-04-25 20:14:54.152075] Cache0: Collision size : 76 kiB 00:19:56.376 [2024-04-25 20:14:54.152081] Cache0: List info offset : 36224 kiB 00:19:56.376 [2024-04-25 20:14:54.152087] Cache0: List info size : 40 kiB 00:19:56.376 [2024-04-25 20:14:54.152094] Cache0: Hash offset : 36352 kiB 00:19:56.376 [2024-04-25 20:14:54.152100] Cache0: Hash size : 8 kiB 00:19:56.376 [2024-04-25 20:14:54.152107] Cache0: Cache line size: 16 kiB 00:19:56.376 [2024-04-25 20:14:54.152116] Cache0: Metadata capacity: 18 MiB 00:19:56.376 [2024-04-25 20:14:54.162412] 
Cache0: Policy 'always' initialized successfully 00:19:56.376 [2024-04-25 20:14:54.253108] Cache0: Done saving cache state! 00:19:56.376 [2024-04-25 20:14:54.284002] Cache0: Cache attached 00:19:56.376 [2024-04-25 20:14:54.284100] Cache0: Successfully attached 00:19:56.376 [2024-04-25 20:14:54.284371] Cache0: Inserting core Malloc1 00:19:56.376 [2024-04-25 20:14:54.284394] Cache0.Malloc1: Seqential cutoff init 00:19:56.635 [2024-04-25 20:14:54.315179] Cache0.Malloc1: Successfully added 00:19:56.635 Cache0 00:19:56.635 20:14:54 -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:19:56.635 20:14:54 -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:56.635 true 00:19:56.635 20:14:54 -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:19:56.635 20:14:54 -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 16' 00:19:56.894 true 00:19:56.894 20:14:54 -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:56.894 20:14:54 -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 16' 00:19:57.178 true 00:19:57.178 20:14:55 -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:19:57.438 [2024-04-25 20:14:55.251846] Cache0: Flushing cache 00:19:57.438 [2024-04-25 20:14:55.251884] Cache0: Flushing cache completed 00:19:57.438 [2024-04-25 20:14:55.252366] Cache0.Malloc1: Removing core 00:19:57.438 [2024-04-25 20:14:55.285483] Cache0: Core Malloc1 successfully removed 00:19:57.438 [2024-04-25 20:14:55.285540] Cache0: Stopping cache 00:19:57.697 [2024-04-25 20:14:55.373720] Cache0: Done saving cache state! 
00:19:57.697 [2024-04-25 20:14:55.392902] Cache Cache0 successfully stopped 00:19:57.697 20:14:55 -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:57.957 20:14:55 -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:58.216 20:14:55 -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:19:58.217 20:14:55 -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:19:58.217 Malloc0 00:19:58.217 20:14:56 -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:19:58.476 Malloc1 00:19:58.476 20:14:56 -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 32 00:19:58.736 [2024-04-25 20:14:56.605407] Inserting cache Cache0 00:19:58.736 [2024-04-25 20:14:56.605828] Cache0: Metadata initialized 00:19:58.736 [2024-04-25 20:14:56.606251] Cache0: Successfully added 00:19:58.736 [2024-04-25 20:14:56.606259] Cache0: Cache mode : wt 00:19:58.736 [2024-04-25 20:14:56.616075] Cache0: Super block config offset : 0 kiB 00:19:58.736 [2024-04-25 20:14:56.616097] Cache0: Super block config size : 2200 B 00:19:58.736 [2024-04-25 20:14:56.616104] Cache0: Super block runtime offset : 128 kiB 00:19:58.736 [2024-04-25 20:14:56.616111] Cache0: Super block runtime size : 4 B 00:19:58.736 [2024-04-25 20:14:56.616117] Cache0: Reserved offset : 256 kiB 00:19:58.737 [2024-04-25 20:14:56.616131] Cache0: Reserved size : 128 kiB 00:19:58.737 [2024-04-25 20:14:56.616138] Cache0: Part config offset : 384 kiB 00:19:58.737 [2024-04-25 20:14:56.616144] Cache0: Part config size : 48 kiB 00:19:58.737 [2024-04-25 20:14:56.616151] Cache0: Part runtime offset : 640 kiB 00:19:58.737 [2024-04-25 20:14:56.616157] Cache0: Part runtime size : 72 kiB 00:19:58.737 [2024-04-25 20:14:56.616164] Cache0: Core config offset : 768 kiB 00:19:58.737 [2024-04-25 20:14:56.616170] Cache0: Core config size : 512 kiB 00:19:58.737 [2024-04-25 20:14:56.616176] Cache0: Core runtime offset : 1792 kiB 00:19:58.737 [2024-04-25 20:14:56.616183] Cache0: Core runtime size : 1172 kiB 00:19:58.737 [2024-04-25 20:14:56.616189] Cache0: Core UUID offset : 3072 kiB 00:19:58.737 [2024-04-25 20:14:56.616196] Cache0: Core UUID size : 16384 kiB 00:19:58.737 [2024-04-25 20:14:56.616202] Cache0: Cleaning offset : 35840 kiB 00:19:58.737 [2024-04-25 20:14:56.616209] Cache0: Cleaning size : 28 kiB 00:19:58.737 [2024-04-25 20:14:56.616215] Cache0: LRU list offset : 35968 kiB 00:19:58.737 [2024-04-25 20:14:56.616222] Cache0: LRU list size : 20 kiB 00:19:58.737 [2024-04-25 20:14:56.616228] Cache0: Collision offset : 36096 kiB 00:19:58.737 [2024-04-25 20:14:56.616234] Cache0: Collision size : 56 kiB 00:19:58.737 [2024-04-25 20:14:56.616241] Cache0: List info offset : 36224 kiB 00:19:58.737 [2024-04-25 20:14:56.616247] Cache0: List info size : 20 kiB 00:19:58.737 [2024-04-25 20:14:56.616254] Cache0: Hash offset : 36352 kiB 00:19:58.737 [2024-04-25 20:14:56.616260] Cache0: Hash size : 4 kiB 00:19:58.737 [2024-04-25 20:14:56.616267] Cache0: Cache line size: 32 kiB 00:19:58.737 [2024-04-25 20:14:56.616275] Cache0: Metadata capacity: 18 MiB 00:19:58.737 [2024-04-25 20:14:56.625600] 
Cache0: Policy 'always' initialized successfully 00:19:58.996 [2024-04-25 20:14:56.711703] Cache0: Done saving cache state! 00:19:58.996 [2024-04-25 20:14:56.742650] Cache0: Cache attached 00:19:58.996 [2024-04-25 20:14:56.742747] Cache0: Successfully attached 00:19:58.996 [2024-04-25 20:14:56.743015] Cache0: Inserting core Malloc1 00:19:58.996 [2024-04-25 20:14:56.743038] Cache0.Malloc1: Seqential cutoff init 00:19:58.996 [2024-04-25 20:14:56.773630] Cache0.Malloc1: Successfully added 00:19:58.996 Cache0 00:19:58.996 20:14:56 -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:19:58.996 20:14:56 -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:19:59.255 true 00:19:59.255 20:14:57 -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:19:59.255 20:14:57 -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 32' 00:19:59.514 true 00:19:59.514 20:14:57 -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:59.514 20:14:57 -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 32' 00:19:59.774 true 00:19:59.774 20:14:57 -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:19:59.774 [2024-04-25 20:14:57.698403] Cache0: Flushing cache 00:19:59.774 [2024-04-25 20:14:57.698438] Cache0: Flushing cache completed 00:19:59.774 [2024-04-25 20:14:57.698830] Cache0.Malloc1: Removing core 00:20:00.033 [2024-04-25 20:14:57.732346] Cache0: Core Malloc1 successfully removed 00:20:00.033 [2024-04-25 20:14:57.732401] Cache0: Stopping cache 00:20:00.033 [2024-04-25 20:14:57.817233] Cache0: Done saving cache state! 
00:20:00.033 [2024-04-25 20:14:57.837392] Cache Cache0 successfully stopped 00:20:00.034 20:14:57 -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:00.293 20:14:58 -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:00.553 20:14:58 -- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}" 00:20:00.553 20:14:58 -- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:20:00.812 Malloc0 00:20:00.812 20:14:58 -- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:20:01.071 Malloc1 00:20:01.071 20:14:58 -- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 64 00:20:01.331 [2024-04-25 20:14:59.016305] Inserting cache Cache0 00:20:01.331 [2024-04-25 20:14:59.016788] Cache0: Metadata initialized 00:20:01.331 [2024-04-25 20:14:59.017213] Cache0: Successfully added 00:20:01.331 [2024-04-25 20:14:59.017221] Cache0: Cache mode : wt 00:20:01.331 [2024-04-25 20:14:59.027998] Cache0: Super block config offset : 0 kiB 00:20:01.331 [2024-04-25 20:14:59.028022] Cache0: Super block config size : 2200 B 00:20:01.331 [2024-04-25 20:14:59.028029] Cache0: Super block runtime offset : 128 kiB 00:20:01.331 [2024-04-25 20:14:59.028036] Cache0: Super block runtime size : 4 B 00:20:01.331 [2024-04-25 20:14:59.028042] Cache0: Reserved offset : 256 kiB 00:20:01.331 [2024-04-25 20:14:59.028049] Cache0: Reserved size : 128 kiB 00:20:01.331 [2024-04-25 20:14:59.028055] Cache0: Part config offset : 384 kiB 00:20:01.331 [2024-04-25 20:14:59.028062] Cache0: Part config size : 48 kiB 00:20:01.331 [2024-04-25 20:14:59.028068] Cache0: Part runtime offset : 640 kiB 00:20:01.331 [2024-04-25 20:14:59.028075] Cache0: Part runtime size : 72 kiB 00:20:01.331 [2024-04-25 20:14:59.028081] Cache0: Core config offset : 768 kiB 00:20:01.331 [2024-04-25 20:14:59.028087] Cache0: Core config size : 512 kiB 00:20:01.331 [2024-04-25 20:14:59.028094] Cache0: Core runtime offset : 1792 kiB 00:20:01.331 [2024-04-25 20:14:59.028100] Cache0: Core runtime size : 1172 kiB 00:20:01.331 [2024-04-25 20:14:59.028107] Cache0: Core UUID offset : 3072 kiB 00:20:01.331 [2024-04-25 20:14:59.028113] Cache0: Core UUID size : 16384 kiB 00:20:01.331 [2024-04-25 20:14:59.028120] Cache0: Cleaning offset : 35840 kiB 00:20:01.331 [2024-04-25 20:14:59.028126] Cache0: Cleaning size : 16 kiB 00:20:01.331 [2024-04-25 20:14:59.028133] Cache0: LRU list offset : 35968 kiB 00:20:01.331 [2024-04-25 20:14:59.028139] Cache0: LRU list size : 12 kiB 00:20:01.331 [2024-04-25 20:14:59.028146] Cache0: Collision offset : 36096 kiB 00:20:01.331 [2024-04-25 20:14:59.028152] Cache0: Collision size : 44 kiB 00:20:01.331 [2024-04-25 20:14:59.028159] Cache0: List info offset : 36224 kiB 00:20:01.331 [2024-04-25 20:14:59.028165] Cache0: List info size : 12 kiB 00:20:01.331 [2024-04-25 20:14:59.028171] Cache0: Hash offset : 36352 kiB 00:20:01.331 [2024-04-25 20:14:59.028178] Cache0: Hash size : 4 kiB 00:20:01.331 [2024-04-25 20:14:59.028185] Cache0: Cache line size: 64 kiB 00:20:01.331 [2024-04-25 20:14:59.028193] Cache0: Metadata capacity: 18 MiB 00:20:01.331 [2024-04-25 20:14:59.038448] 
Cache0: Policy 'always' initialized successfully 00:20:01.331 [2024-04-25 20:14:59.123749] Cache0: Done saving cache state! 00:20:01.331 [2024-04-25 20:14:59.155435] Cache0: Cache attached 00:20:01.331 [2024-04-25 20:14:59.155534] Cache0: Successfully attached 00:20:01.331 [2024-04-25 20:14:59.155820] Cache0: Inserting core Malloc1 00:20:01.331 [2024-04-25 20:14:59.155846] Cache0.Malloc1: Seqential cutoff init 00:20:01.331 [2024-04-25 20:14:59.187230] Cache0.Malloc1: Successfully added 00:20:01.331 Cache0 00:20:01.331 20:14:59 -- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:20:01.331 20:14:59 -- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:20:01.590 true 00:20:01.590 20:14:59 -- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:20:01.590 20:14:59 -- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 64' 00:20:01.850 true 00:20:01.850 20:14:59 -- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:01.850 20:14:59 -- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 64' 00:20:02.109 true 00:20:02.109 20:14:59 -- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0 00:20:02.368 [2024-04-25 20:15:00.119912] Cache0: Flushing cache 00:20:02.368 [2024-04-25 20:15:00.119950] Cache0: Flushing cache completed 00:20:02.368 [2024-04-25 20:15:00.120314] Cache0.Malloc1: Removing core 00:20:02.368 [2024-04-25 20:15:00.152437] Cache0: Core Malloc1 successfully removed 00:20:02.368 [2024-04-25 20:15:00.152492] Cache0: Stopping cache 00:20:02.368 [2024-04-25 20:15:00.235613] Cache0: Done saving cache state! 
00:20:02.368 [2024-04-25 20:15:00.253872] Cache Cache0 successfully stopped 00:20:02.368 20:15:00 -- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:02.628 20:15:00 -- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:02.888 20:15:00 -- management/configuration-change.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0 00:20:03.147 Malloc0 00:20:03.147 20:15:01 -- management/configuration-change.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1 00:20:03.406 Malloc1 00:20:03.406 20:15:01 -- management/configuration-change.sh@42 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 00:20:03.665 [2024-04-25 20:15:01.436890] Inserting cache Cache0 00:20:03.665 [2024-04-25 20:15:01.437349] Cache0: Metadata initialized 00:20:03.665 [2024-04-25 20:15:01.437778] Cache0: Successfully added 00:20:03.665 [2024-04-25 20:15:01.437787] Cache0: Cache mode : wt 00:20:03.665 [2024-04-25 20:15:01.448391] Cache0: Super block config offset : 0 kiB 00:20:03.665 [2024-04-25 20:15:01.448414] Cache0: Super block config size : 2200 B 00:20:03.665 [2024-04-25 20:15:01.448421] Cache0: Super block runtime offset : 128 kiB 00:20:03.665 [2024-04-25 20:15:01.448427] Cache0: Super block runtime size : 4 B 00:20:03.665 [2024-04-25 20:15:01.448434] Cache0: Reserved offset : 256 kiB 00:20:03.665 [2024-04-25 20:15:01.448441] Cache0: Reserved size : 128 kiB 00:20:03.665 [2024-04-25 20:15:01.448447] Cache0: Part config offset : 384 kiB 00:20:03.665 [2024-04-25 20:15:01.448453] Cache0: Part config size : 48 kiB 00:20:03.665 [2024-04-25 20:15:01.448460] Cache0: Part runtime offset : 640 kiB 00:20:03.665 [2024-04-25 20:15:01.448466] Cache0: Part runtime size : 72 kiB 00:20:03.665 [2024-04-25 20:15:01.448473] Cache0: Core config offset : 768 kiB 00:20:03.665 [2024-04-25 20:15:01.448479] Cache0: Core config size : 512 kiB 00:20:03.665 [2024-04-25 20:15:01.448486] Cache0: Core runtime offset : 1792 kiB 00:20:03.665 [2024-04-25 20:15:01.448492] Cache0: Core runtime size : 1172 kiB 00:20:03.665 [2024-04-25 20:15:01.448498] Cache0: Core UUID offset : 3072 kiB 00:20:03.665 [2024-04-25 20:15:01.448505] Cache0: Core UUID size : 16384 kiB 00:20:03.665 [2024-04-25 20:15:01.448511] Cache0: Cleaning offset : 35840 kiB 00:20:03.665 [2024-04-25 20:15:01.448518] Cache0: Cleaning size : 196 kiB 00:20:03.665 [2024-04-25 20:15:01.448524] Cache0: LRU list offset : 36096 kiB 00:20:03.665 [2024-04-25 20:15:01.448531] Cache0: LRU list size : 148 kiB 00:20:03.665 [2024-04-25 20:15:01.448537] Cache0: Collision offset : 36352 kiB 00:20:03.665 [2024-04-25 20:15:01.448543] Cache0: Collision size : 196 kiB 00:20:03.665 [2024-04-25 20:15:01.448550] Cache0: List info offset : 36608 kiB 00:20:03.665 [2024-04-25 20:15:01.448556] Cache0: List info size : 148 kiB 00:20:03.665 [2024-04-25 20:15:01.448563] Cache0: Hash offset : 36864 kiB 00:20:03.665 [2024-04-25 20:15:01.448569] Cache0: Hash size : 20 kiB 00:20:03.665 [2024-04-25 20:15:01.448576] Cache0: Cache line size: 4 kiB 00:20:03.665 [2024-04-25 20:15:01.448585] Cache0: Metadata capacity: 18 MiB 00:20:03.665 [2024-04-25 20:15:01.458703] Cache0: Policy 'always' initialized successfully 00:20:03.665 [2024-04-25 20:15:01.572834] Cache0: Done saving cache state! 
00:20:03.925 [2024-04-25 20:15:01.605126] Cache0: Cache attached 00:20:03.925 [2024-04-25 20:15:01.605222] Cache0: Successfully attached 00:20:03.925 [2024-04-25 20:15:01.605502] Cache0: Inserting core Malloc1 00:20:03.925 [2024-04-25 20:15:01.605526] Cache0.Malloc1: Seqential cutoff init 00:20:03.925 [2024-04-25 20:15:01.638125] Cache0.Malloc1: Successfully added 00:20:03.925 Cache0 00:20:03.925 20:15:01 -- management/configuration-change.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs 00:20:03.925 20:15:01 -- management/configuration-change.sh@44 -- # jq -e '.[0] | .started and .cache.attached and .core.attached' 00:20:04.183 true 00:20:04.183 20:15:01 -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:20:04.183 20:15:01 -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wt 00:20:04.183 [2024-04-25 20:15:02.113027] Cache0: Cache mode 'Write Through' is already set 00:20:04.183 wt 00:20:04.441 20:15:02 -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:20:04.441 20:15:02 -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wt"' 00:20:04.441 true 00:20:04.699 20:15:02 -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:04.699 20:15:02 -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wt"' 00:20:04.699 true 00:20:04.699 20:15:02 -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:20:04.699 20:15:02 -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wb 00:20:04.957 [2024-04-25 20:15:02.815039] Cache0: Changing cache mode from 'Write Through' to 'Write Back' successful 00:20:04.957 wb 00:20:04.957 20:15:02 -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:20:04.957 20:15:02 -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wb"' 00:20:05.217 true 00:20:05.217 20:15:03 -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:05.217 20:15:03 -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wb"' 00:20:05.475 true 00:20:05.475 20:15:03 -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:20:05.475 20:15:03 -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 pt 00:20:05.732 [2024-04-25 20:15:03.493076] Cache0: Changing cache mode from 'Write Back' to 'Pass Through' successful 00:20:05.732 pt 00:20:05.732 20:15:03 -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:20:05.732 20:15:03 -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "pt"' 00:20:05.990 true 00:20:05.990 20:15:03 -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:05.990 
20:15:03 -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "pt"' 00:20:06.248 true 00:20:06.248 20:15:03 -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:20:06.248 20:15:03 -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wa 00:20:06.248 [2024-04-25 20:15:04.182967] Cache0: Changing cache mode from 'Pass Through' to 'Write Around' successful 00:20:06.507 wa 00:20:06.507 20:15:04 -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:20:06.507 20:15:04 -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wa"' 00:20:06.507 true 00:20:06.507 20:15:04 -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:06.507 20:15:04 -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wa"' 00:20:06.766 true 00:20:06.766 20:15:04 -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:20:06.766 20:15:04 -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wi 00:20:07.025 [2024-04-25 20:15:04.897005] Cache0: Changing cache mode from 'Write Around' to 'Write Invalidate' successful 00:20:07.025 wi 00:20:07.025 20:15:04 -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:20:07.025 20:15:04 -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wi"' 00:20:07.284 true 00:20:07.284 20:15:05 -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wi"' 00:20:07.284 20:15:05 -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:07.543 true 00:20:07.543 20:15:05 -- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}" 00:20:07.543 20:15:05 -- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wo 00:20:07.802 [2024-04-25 20:15:05.538819] Cache0: Changing cache mode from 'Write Invalidate' to 'Write Only' successful 00:20:07.802 wo 00:20:07.802 20:15:05 -- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wo"' 00:20:07.802 20:15:05 -- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0 00:20:08.061 true 00:20:08.061 20:15:05 -- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:08.061 20:15:05 -- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wo"' 00:20:08.320 true 00:20:08.320 20:15:06 -- management/configuration-change.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p always -t 64 00:20:08.320 [2024-04-25 20:15:06.224787] Cache0.Malloc1: Changing sequential cutoff policy from full to always 00:20:08.320 [2024-04-25 20:15:06.224856] 
Cache0.Malloc1: Changing sequential cutoff threshold from 1024 to 65536 bytes successful 00:20:08.320 20:15:06 -- management/configuration-change.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p never -t 16 00:20:08.578 [2024-04-25 20:15:06.453436] Cache0.Malloc1: Changing sequential cutoff policy from always to never 00:20:08.578 [2024-04-25 20:15:06.453498] Cache0.Malloc1: Changing sequential cutoff threshold from 65536 to 16384 bytes successful 00:20:08.578 20:15:06 -- management/configuration-change.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:20:08.578 20:15:06 -- management/configuration-change.sh@63 -- # killprocess 2170461 00:20:08.578 20:15:06 -- common/autotest_common.sh@926 -- # '[' -z 2170461 ']' 00:20:08.578 20:15:06 -- common/autotest_common.sh@930 -- # kill -0 2170461 00:20:08.578 20:15:06 -- common/autotest_common.sh@931 -- # uname 00:20:08.578 20:15:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:08.578 20:15:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2170461 00:20:08.871 20:15:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:08.871 20:15:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:08.871 20:15:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2170461' 00:20:08.871 killing process with pid 2170461 00:20:08.871 20:15:06 -- common/autotest_common.sh@945 -- # kill 2170461 00:20:08.871 20:15:06 -- common/autotest_common.sh@950 -- # wait 2170461 00:20:08.871 [2024-04-25 20:15:06.679381] Cache0: Flushing cache 00:20:08.871 [2024-04-25 20:15:06.679430] Cache0: Flushing cache completed 00:20:08.871 [2024-04-25 20:15:06.679483] Cache0: Stopping cache 00:20:08.871 [2024-04-25 20:15:06.787629] Cache0: Done saving cache state! 
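The wt, wb, pt, wa, wi, wo sequence and the two seqcutoff calls above are the second half of configuration-change.sh. Condensed the same way (only RPCs and filters visible in the trace), each step amounts to:

for mode in wt wb pt wa wi wo; do
    ./scripts/rpc.py bdev_ocf_set_cache_mode Cache0 "$mode"
    ./scripts/rpc.py bdev_get_bdevs -b Cache0 | jq -e ".[0] | .driver_specific.mode == \"$mode\""
    ./scripts/rpc.py save_subsystem_config -n bdev | jq -e ".config | .[] | select(.method == \"bdev_ocf_create\") | .params.mode == \"$mode\""
done
# sequential cutoff: -t is in KiB (the trace shows 64 becoming 65536 bytes), and OCF echoes old/new policy and threshold
./scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p always -t 64   # full to always, 1 KiB to 64 KiB
./scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p never -t 16    # always to never, 64 KiB to 16 KiB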
00:20:09.130 [2024-04-25 20:15:06.805638] Cache Cache0 successfully stopped 00:20:09.389 00:20:09.389 real 0m19.708s 00:20:09.389 user 0m33.123s 00:20:09.389 sys 0m3.433s 00:20:09.389 20:15:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.389 20:15:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.389 ************************************ 00:20:09.389 END TEST ocf_configuration_change 00:20:09.389 ************************************ 00:20:09.389 00:20:09.389 real 1m42.893s 00:20:09.389 user 2m39.724s 00:20:09.389 sys 0m17.645s 00:20:09.389 20:15:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.389 20:15:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.389 ************************************ 00:20:09.389 END TEST ocf 00:20:09.389 ************************************ 00:20:09.647 20:15:07 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:09.647 20:15:07 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:09.647 20:15:07 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:09.647 20:15:07 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:09.647 20:15:07 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:09.647 20:15:07 -- spdk/autotest.sh@366 -- # [[ 1 -eq 1 ]] 00:20:09.647 20:15:07 -- spdk/autotest.sh@367 -- # run_test scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh 00:20:09.647 20:15:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:09.647 20:15:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:09.647 20:15:07 -- common/autotest_common.sh@10 -- # set +x 00:20:09.647 ************************************ 00:20:09.647 START TEST scheduler 00:20:09.647 ************************************ 00:20:09.647 20:15:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh 00:20:09.647 * Looking for test storage... 
00:20:09.647 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:20:09.647 20:15:07 -- scheduler/scheduler.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/isolate_cores.sh 00:20:09.647 20:15:07 -- scheduler/isolate_cores.sh@6 -- # xtrace_disable 00:20:09.647 20:15:07 -- common/autotest_common.sh@10 -- # set +x 00:20:11.027 Moved 18 processes, failed 0 00:20:11.027 Moved 2 processes, failed 0 00:20:11.027 Moved 2 processes, failed 0 00:20:16.311 Moved 96 processes, failed 697 00:20:16.570 Moved 97 processes, failed 0 00:20:16.570 20:15:14 -- scheduler/scheduler.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh 00:20:19.862 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:19.862 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:20:19.862 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:20:19.862 20:15:17 -- scheduler/scheduler.sh@14 -- # run_test idle /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh 00:20:19.862 20:15:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:19.862 20:15:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:19.862 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:20:19.862 ************************************ 00:20:19.862 START TEST idle 00:20:19.862 ************************************ 00:20:19.862 20:15:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh 00:20:20.122 * Looking for test storage... 
00:20:20.122 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:20:20.122 20:15:17 -- scheduler/idle.sh@11 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:20:20.122 20:15:17 -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:20:20.122 20:15:17 -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:20:20.122 20:15:17 -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:20:20.122 20:15:17 -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:20:20.122 20:15:17 -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:20:20.122 20:15:17 -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@256 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@257 -- # check_cgroup 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@10 -- # echo 2 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@257 -- # cgroup_version=2 00:20:20.122 20:15:17 -- scheduler/idle.sh@13 -- # trap 'killprocess "$spdk_pid"' EXIT 00:20:20.122 20:15:17 -- scheduler/idle.sh@71 -- # idle 00:20:20.122 20:15:17 -- scheduler/idle.sh@36 -- # local reactor_framework 00:20:20.122 20:15:17 -- scheduler/idle.sh@37 -- # local reactors thread 00:20:20.122 20:15:17 -- scheduler/idle.sh@38 -- # local thread_cpumask 00:20:20.122 20:15:17 -- scheduler/idle.sh@39 -- # local threads 00:20:20.122 20:15:17 -- scheduler/idle.sh@41 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 00:20:20.122 20:15:17 -- scheduler/common.sh@398 -- # [[ -e /proc//status ]] 00:20:20.122 20:15:17 -- scheduler/common.sh@402 -- # spdk_pid=2178512 00:20:20.122 20:15:17 -- scheduler/common.sh@404 -- # waitforlisten 2178512 00:20:20.122 20:15:17 -- scheduler/common.sh@401 -- # exec_in_cgroup /cpuset/spdk /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc 00:20:20.122 20:15:17 -- common/autotest_common.sh@819 -- # '[' -z 2178512 ']' 00:20:20.122 20:15:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@134 -- # local cgroup=/cpuset/spdk 00:20:20.122 20:15:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@135 -- # local proc_interface=cgroup.procs 00:20:20.122 20:15:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
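exec_under_dynamic_scheduler, traced here and continued below, boils down to: launch spdk_tgt pinned to the isolated cores with --wait-for-rpc, place it in the spdk cpuset cgroup (cgroup v2 in threaded mode, hence cgroup.threads rather than cgroup.procs), select the dynamic scheduler, then finish initialization. A simplified sketch under those assumptions (the cgroup path is the one the cgroups.sh helpers print; the real helper execs inside the cgroup instead of moving a background PID):

./build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc &
pid=$!
echo "$pid" > /sys/fs/cgroup/cpuset/spdk/cgroup.threads   # assumes the cpuset/spdk cgroup already exists
# once /var/tmp/spdk.sock is listening:
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init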
00:20:20.122 20:15:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@137 -- # shift 00:20:20.122 20:15:17 -- common/autotest_common.sh@10 -- # set +x 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@139 -- # (( cgroup_version == 2 )) 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@139 -- # is_cgroup_threaded /cpuset/spdk 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@49 -- # [[ -e /sys/fs/cgroup//cpuset/spdk/cgroup.type ]] 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@50 -- # [[ threaded == threaded ]] 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@140 -- # proc_interface=cgroup.threads 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@142 -- # set_cgroup_attr /cpuset/spdk cgroup.threads 2178512 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@101 -- # local cgroup=/cpuset/spdk 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@102 -- # local attr=cgroup.threads 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@103 -- # local val=2178512 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@105 -- # [[ -e /sys/fs/cgroup//cpuset/spdk/cgroup.threads ]] 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@107 -- # [[ -n 2178512 ]] 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@108 -- # echo 2178512 00:20:20.122 20:15:17 -- scheduler/cgroups.sh@143 -- # exec /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc 00:20:20.122 [2024-04-25 20:15:17.922886] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 00:20:20.122 [2024-04-25 20:15:17.922964] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178512 ] 00:20:20.122 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.122 [2024-04-25 20:15:18.016407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8 00:20:20.382 [2024-04-25 20:15:18.118841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:20.382 [2024-04-25 20:15:18.119077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.382 [2024-04-25 20:15:18.119180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.382 [2024-04-25 20:15:18.119280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:20.382 [2024-04-25 20:15:18.119397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40 00:20:20.382 [2024-04-25 20:15:18.119321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38 00:20:20.382 [2024-04-25 20:15:18.119358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39 00:20:20.382 [2024-04-25 20:15:18.119399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.382 [2024-04-25 20:15:18.119299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37 00:20:20.951 20:15:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:20.951 20:15:18 -- common/autotest_common.sh@852 -- # return 0 00:20:20.951 20:15:18 -- scheduler/common.sh@405 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic 00:20:21.210 POWER: Env isn't set yet! 00:20:21.210 POWER: Attempting to initialise ACPI cpufreq power management... 
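Annotation: the trace above covers the launch path for spdk_tgt under the idle test. cgroups.sh detects cgroup v2 (cgroup.controllers exists), finds /cpuset/spdk already set to the 'threaded' type, writes the launcher's PID (2178512) into that cgroup's cgroup.threads, and only then execs spdk_tgt with '-m [1,2,3,4,37,38,39,40] --main-core 1 --wait-for-rpc'. A minimal sketch of the same placement, assuming the /cpuset/spdk cgroup already exists and using an illustrative binary path:

    # sketch only -- mirrors the exec_in_cgroup/set_cgroup_attr steps shown in the trace
    cgroup=/sys/fs/cgroup/cpuset/spdk                # pre-created; cgroup.type is already 'threaded'
    [[ -e /sys/fs/cgroup/cgroup.controllers ]]       # cgroup v2 check, as in check_cgroup
    echo $$ > "$cgroup/cgroup.threads"               # move this shell (and the exec'd target) into the threaded domain
    exec ./build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc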
00:20:21.210 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:21.210 POWER: Cannot set governor of lcore 1 to userspace 00:20:21.210 POWER: Attempting to initialise PSTAT power management... 00:20:21.210 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 1 power management 00:20:21.210 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 2 power management 00:20:21.210 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 3 power management 00:20:21.210 POWER: Power management governor of lcore 4 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 4 power management 00:20:21.210 POWER: Power management governor of lcore 37 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 37 power management 00:20:21.210 POWER: Power management governor of lcore 38 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 38 power management 00:20:21.210 POWER: Power management governor of lcore 39 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 39 power management 00:20:21.210 POWER: Power management governor of lcore 40 has been set to 'performance' successfully 00:20:21.210 POWER: Initialized successfully for lcore 40 power management 00:20:21.210 20:15:19 -- scheduler/common.sh@406 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:21.779 [2024-04-25 20:15:19.546521] 'OCF_Core' volume operations registered 00:20:21.779 [2024-04-25 20:15:19.550037] 'OCF_Cache' volume operations registered 00:20:21.779 [2024-04-25 20:15:19.553974] 'OCF Composite' volume operations registered 00:20:21.779 [2024-04-25 20:15:19.557420] 'SPDK_block_device' volume operations registered 00:20:22.038 20:15:19 -- scheduler/idle.sh@48 -- # get_thread_stats_current 00:20:22.038 20:15:19 -- scheduler/common.sh@411 -- # xtrace_disable 00:20:22.038 20:15:19 -- common/autotest_common.sh@10 -- # set +x 00:20:23.945 20:15:21 -- scheduler/idle.sh@50 -- # xtrace_disable 00:20:23.945 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:20:24.204 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:20:24.204 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e 00:20:24.204 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e 00:20:24.204 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e 00:20:24.204 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e 00:20:24.463 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e 00:20:24.463 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e 00:20:24.463 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e 00:20:24.463 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e 00:20:24.463 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:20:24.722 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:20:24.722 SPDK cpumask: 
[1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:20:24.722 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:20:24.722 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:20:24.722 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:20:24.722 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:20:24.981 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:20:27.514 [load: 2%, idle: 428338532, busy: 11622164] app_thread is idle 00:20:27.514 [load: 0%, idle: 391978398, busy: 221160] nvmf_tgt_poll_group_0 is idle 00:20:27.514 [load: 0%, idle: 390879626, busy: 219400] nvmf_tgt_poll_group_1 is idle 00:20:27.514 [load: 0%, idle: 390866866, busy: 219460] nvmf_tgt_poll_group_2 is idle 00:20:27.514 [load: 0%, idle: 390317718, busy: 219442] nvmf_tgt_poll_group_3 is idle 00:20:27.514 [load: 0%, idle: 391270718, busy: 219460] nvmf_tgt_poll_group_4 is idle 00:20:27.514 [load: 0%, idle: 391183694, busy: 219598] nvmf_tgt_poll_group_5 is idle 00:20:27.514 [load: 0%, idle: 391072372, busy: 219838] nvmf_tgt_poll_group_6 is idle 00:20:27.514 [load: 0%, idle: 390306670, busy: 219188] nvmf_tgt_poll_group_7 is idle 00:20:27.514 [load: 0%, idle: 397618018, busy: 220632] iscsi_poll_group_1 is idle 00:20:27.514 [load: 0%, idle: 399115376, busy: 221270] iscsi_poll_group_2 is idle 00:20:27.514 [load: 0%, idle: 396444518, busy: 233280] iscsi_poll_group_3 is idle 00:20:27.514 [load: 0%, idle: 396581796, busy: 220622] iscsi_poll_group_4 is idle 00:20:27.514 [load: 0%, idle: 397637434, busy: 226792] iscsi_poll_group_37 is idle 00:20:27.514 [load: 0%, idle: 400572330, busy: 226788] iscsi_poll_group_38 is idle 00:20:27.514 [load: 0%, idle: 396001820, busy: 226566] iscsi_poll_group_39 is idle 00:20:27.514 [load: 0%, idle: 396152316, busy: 227232] iscsi_poll_group_40 is idle 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:20:27.514 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:20:27.773 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:20:27.773 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:20:27.773 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:20:27.773 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:20:27.773 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread 
iscsi_poll_group_40 cpumask: 0x10000000000 00:20:30.308 [load: 2%, idle: 430402758, busy: 11521838] app_thread is idle 00:20:30.308 [load: 0%, idle: 394103334, busy: 220514] nvmf_tgt_poll_group_0 is idle 00:20:30.308 [load: 0%, idle: 392669516, busy: 219824] nvmf_tgt_poll_group_1 is idle 00:20:30.308 [load: 0%, idle: 392272568, busy: 219312] nvmf_tgt_poll_group_2 is idle 00:20:30.308 [load: 0%, idle: 392626718, busy: 230794] nvmf_tgt_poll_group_3 is idle 00:20:30.308 [load: 0%, idle: 392188908, busy: 219344] nvmf_tgt_poll_group_4 is idle 00:20:30.308 [load: 0%, idle: 392469224, busy: 219408] nvmf_tgt_poll_group_5 is idle 00:20:30.308 [load: 0%, idle: 392260322, busy: 220186] nvmf_tgt_poll_group_6 is idle 00:20:30.308 [load: 0%, idle: 392704208, busy: 219270] nvmf_tgt_poll_group_7 is idle 00:20:30.308 [load: 0%, idle: 399041452, busy: 237220] iscsi_poll_group_1 is idle 00:20:30.308 [load: 0%, idle: 400867746, busy: 224812] iscsi_poll_group_2 is idle 00:20:30.308 [load: 0%, idle: 398710662, busy: 225052] iscsi_poll_group_3 is idle 00:20:30.308 [load: 0%, idle: 398358406, busy: 225330] iscsi_poll_group_4 is idle 00:20:30.308 [load: 0%, idle: 399277764, busy: 231528] iscsi_poll_group_37 is idle 00:20:30.308 [load: 0%, idle: 402102296, busy: 231570] iscsi_poll_group_38 is idle 00:20:30.308 [load: 0%, idle: 397622292, busy: 241874] iscsi_poll_group_39 is idle 00:20:30.308 [load: 0%, idle: 398217872, busy: 231976] iscsi_poll_group_40 is idle 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:20:30.308 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:20:30.567 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:20:30.567 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:20:30.567 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:20:30.567 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:20:30.567 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:20:33.099 [load: 2%, idle: 411416444, busy: 11237700] app_thread is idle 00:20:33.099 [load: 0%, idle: 376685222, busy: 220808] nvmf_tgt_poll_group_0 is idle 00:20:33.099 [load: 0%, idle: 376129134, busy: 219824] nvmf_tgt_poll_group_1 is idle 00:20:33.099 [load: 0%, idle: 375336554, busy: 219540] nvmf_tgt_poll_group_2 is idle 00:20:33.099 [load: 0%, idle: 375583930, busy: 219132] nvmf_tgt_poll_group_3 is idle 00:20:33.099 [load: 0%, idle: 375174490, 
busy: 219854] nvmf_tgt_poll_group_4 is idle 00:20:33.099 [load: 0%, idle: 374972266, busy: 219582] nvmf_tgt_poll_group_5 is idle 00:20:33.099 [load: 0%, idle: 375146186, busy: 219434] nvmf_tgt_poll_group_6 is idle 00:20:33.099 [load: 0%, idle: 375233480, busy: 219306] nvmf_tgt_poll_group_7 is idle 00:20:33.099 [load: 0%, idle: 381375618, busy: 226818] iscsi_poll_group_1 is idle 00:20:33.099 [load: 0%, idle: 383339660, busy: 234980] iscsi_poll_group_2 is idle 00:20:33.099 [load: 0%, idle: 381536248, busy: 224646] iscsi_poll_group_3 is idle 00:20:33.099 [load: 0%, idle: 380815764, busy: 225344] iscsi_poll_group_4 is idle 00:20:33.099 [load: 0%, idle: 381773180, busy: 231572] iscsi_poll_group_37 is idle 00:20:33.099 [load: 0%, idle: 384790252, busy: 231350] iscsi_poll_group_38 is idle 00:20:33.099 [load: 0%, idle: 380535342, busy: 231354] iscsi_poll_group_39 is idle 00:20:33.099 [load: 0%, idle: 380226114, busy: 231712] iscsi_poll_group_40 is idle 00:20:33.099 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:20:33.099 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e 00:20:33.099 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e 00:20:33.099 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e 00:20:33.099 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e 00:20:33.358 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e 00:20:33.358 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e 00:20:33.358 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e 00:20:33.358 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e 00:20:33.358 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:20:33.358 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:20:33.358 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:20:33.620 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:20:33.620 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:20:33.620 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:20:33.620 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:20:33.620 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:20:36.199 [load: 2%, idle: 444660522, busy: 11973374] app_thread is idle 00:20:36.199 [load: 0%, idle: 407670712, busy: 220458] nvmf_tgt_poll_group_0 is idle 00:20:36.199 [load: 0%, idle: 406141692, busy: 219942] nvmf_tgt_poll_group_1 is idle 00:20:36.199 [load: 0%, idle: 405497546, busy: 219626] nvmf_tgt_poll_group_2 is idle 00:20:36.199 [load: 0%, idle: 405566540, busy: 219784] nvmf_tgt_poll_group_3 is idle 00:20:36.199 [load: 0%, idle: 405736646, busy: 229828] nvmf_tgt_poll_group_4 is idle 00:20:36.199 [load: 0%, idle: 405758468, busy: 220292] nvmf_tgt_poll_group_5 is idle 00:20:36.199 [load: 0%, idle: 406222330, busy: 219400] nvmf_tgt_poll_group_6 is idle 00:20:36.199 [load: 0%, idle: 405586764, busy: 219532] nvmf_tgt_poll_group_7 is idle 00:20:36.199 [load: 0%, idle: 412629330, busy: 225510] iscsi_poll_group_1 is idle 00:20:36.199 [load: 0%, idle: 414101852, busy: 224930] iscsi_poll_group_2 is idle 00:20:36.199 [load: 0%, idle: 411921546, 
busy: 224306] iscsi_poll_group_3 is idle 00:20:36.199 [load: 0%, idle: 411473218, busy: 224998] iscsi_poll_group_4 is idle 00:20:36.199 [load: 0%, idle: 413200598, busy: 231450] iscsi_poll_group_37 is idle 00:20:36.199 [load: 0%, idle: 416453748, busy: 231670] iscsi_poll_group_38 is idle 00:20:36.199 [load: 0%, idle: 411191896, busy: 231512] iscsi_poll_group_39 is idle 00:20:36.199 [load: 0%, idle: 411575878, busy: 245332] iscsi_poll_group_40 is idle 00:20:36.199 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2 00:20:36.199 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e 00:20:36.199 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e 00:20:36.199 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e 00:20:36.199 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e 00:20:36.199 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e 00:20:36.200 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e 00:20:36.200 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e 00:20:36.200 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e 00:20:36.460 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2 00:20:36.460 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4 00:20:36.460 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8 00:20:36.460 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10 00:20:36.460 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000 00:20:36.460 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000 00:20:36.460 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000 00:20:36.720 SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000 00:20:39.255 [load: 2%, idle: 426027190, busy: 11584996] app_thread is idle 00:20:39.255 [load: 0%, idle: 390041364, busy: 219502] nvmf_tgt_poll_group_0 is idle 00:20:39.255 [load: 0%, idle: 388915944, busy: 219500] nvmf_tgt_poll_group_1 is idle 00:20:39.255 [load: 0%, idle: 388496134, busy: 219612] nvmf_tgt_poll_group_2 is idle 00:20:39.255 [load: 0%, idle: 388466414, busy: 219652] nvmf_tgt_poll_group_3 is idle 00:20:39.255 [load: 0%, idle: 388812382, busy: 219496] nvmf_tgt_poll_group_4 is idle 00:20:39.255 [load: 0%, idle: 388345058, busy: 219656] nvmf_tgt_poll_group_5 is idle 00:20:39.255 [load: 0%, idle: 388508658, busy: 219430] nvmf_tgt_poll_group_6 is idle 00:20:39.255 [load: 0%, idle: 389142034, busy: 219450] nvmf_tgt_poll_group_7 is idle 00:20:39.255 [load: 0%, idle: 395349156, busy: 237764] iscsi_poll_group_1 is idle 00:20:39.255 [load: 0%, idle: 396780230, busy: 224824] iscsi_poll_group_2 is idle 00:20:39.255 [load: 0%, idle: 394607330, busy: 225040] iscsi_poll_group_3 is idle 00:20:39.255 [load: 0%, idle: 394150930, busy: 225712] iscsi_poll_group_4 is idle 00:20:39.255 [load: 0%, idle: 395351888, busy: 231444] iscsi_poll_group_37 is idle 00:20:39.255 [load: 0%, idle: 398561290, busy: 231312] iscsi_poll_group_38 is idle 00:20:39.255 [load: 0%, idle: 393980958, busy: 231362] iscsi_poll_group_39 is idle 00:20:39.255 [load: 0%, idle: 393928966, busy: 231726] iscsi_poll_group_40 is idle 00:20:39.255 20:15:36 -- scheduler/idle.sh@1 -- # 
killprocess 2178512 00:20:39.255 20:15:36 -- common/autotest_common.sh@926 -- # '[' -z 2178512 ']' 00:20:39.255 20:15:36 -- common/autotest_common.sh@930 -- # kill -0 2178512 00:20:39.255 20:15:36 -- common/autotest_common.sh@931 -- # uname 00:20:39.255 20:15:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.255 20:15:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2178512 00:20:39.255 20:15:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:39.255 20:15:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:39.255 20:15:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2178512' 00:20:39.255 killing process with pid 2178512 00:20:39.255 20:15:36 -- common/autotest_common.sh@945 -- # kill 2178512 00:20:39.255 20:15:36 -- common/autotest_common.sh@950 -- # wait 2178512 00:20:39.255 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:20:39.255 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:20:39.255 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:20:39.255 POWER: Power management governor of lcore 4 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original 00:20:39.255 POWER: Power management governor of lcore 37 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original 00:20:39.255 POWER: Power management governor of lcore 38 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original 00:20:39.255 POWER: Power management governor of lcore 39 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original 00:20:39.255 POWER: Power management governor of lcore 40 has been set to 'powersave' successfully 00:20:39.255 POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original 00:20:39.515 00:20:39.515 real 0m19.578s 00:20:39.515 user 0m41.800s 00:20:39.515 sys 0m2.837s 00:20:39.515 20:15:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:39.515 20:15:37 -- common/autotest_common.sh@10 -- # set +x 00:20:39.515 ************************************ 00:20:39.515 END TEST idle 00:20:39.515 ************************************ 00:20:39.515 20:15:37 -- scheduler/scheduler.sh@16 -- # run_test dpdk_governor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh 00:20:39.515 20:15:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:39.515 20:15:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:39.515 20:15:37 -- common/autotest_common.sh@10 -- # set +x 00:20:39.515 ************************************ 00:20:39.515 START TEST dpdk_governor 00:20:39.515 ************************************ 00:20:39.515 20:15:37 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh 00:20:39.777 * Looking for test storage... 00:20:39.777 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:20:39.777 20:15:37 -- scheduler/governor.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:20:39.777 20:15:37 -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:20:39.777 20:15:37 -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:20:39.777 20:15:37 -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:20:39.777 20:15:37 -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:20:39.777 20:15:37 -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:20:39.777 20:15:37 -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:20:39.777 20:15:37 -- scheduler/cgroups.sh@256 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:20:39.777 20:15:37 -- scheduler/cgroups.sh@257 -- # check_cgroup 00:20:39.777 20:15:37 -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:20:39.777 20:15:37 -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:20:39.777 20:15:37 -- scheduler/cgroups.sh@10 -- # echo 2 00:20:39.777 20:15:37 -- scheduler/cgroups.sh@257 -- # cgroup_version=2 00:20:39.777 20:15:37 -- scheduler/governor.sh@12 -- # trap 'killprocess "$spdk_pid" || :; restore_cpufreq' EXIT 00:20:39.777 20:15:37 -- scheduler/governor.sh@157 -- # map_cpufreq 00:20:39.777 20:15:37 -- scheduler/common.sh@236 -- # cpufreq_drivers=() 00:20:39.777 20:15:37 -- scheduler/common.sh@236 -- # local -g cpufreq_drivers 00:20:39.777 20:15:37 -- scheduler/common.sh@237 -- # cpufreq_governors=() 00:20:39.777 20:15:37 -- scheduler/common.sh@237 -- # local -g cpufreq_governors 00:20:39.777 20:15:37 -- scheduler/common.sh@238 -- # cpufreq_base_freqs=() 00:20:39.777 20:15:37 -- scheduler/common.sh@238 -- # local -g cpufreq_base_freqs 00:20:39.777 20:15:37 -- scheduler/common.sh@239 -- # cpufreq_max_freqs=() 00:20:39.777 20:15:37 -- scheduler/common.sh@239 -- # local -g cpufreq_max_freqs 00:20:39.777 20:15:37 -- scheduler/common.sh@240 -- # cpufreq_min_freqs=() 00:20:39.777 20:15:37 -- scheduler/common.sh@240 -- # local -g cpufreq_min_freqs 00:20:39.777 20:15:37 -- scheduler/common.sh@241 -- # cpufreq_cur_freqs=() 00:20:39.777 20:15:37 -- scheduler/common.sh@241 -- # local -g cpufreq_cur_freqs 00:20:39.777 20:15:37 -- scheduler/common.sh@242 -- # cpufreq_is_turbo=() 00:20:39.777 20:15:37 -- scheduler/common.sh@242 -- # local -g cpufreq_is_turbo 00:20:39.777 20:15:37 -- scheduler/common.sh@243 -- # cpufreq_available_freqs=() 00:20:39.777 20:15:37 -- scheduler/common.sh@243 -- # local -g cpufreq_available_freqs 00:20:39.777 20:15:37 -- scheduler/common.sh@244 -- # cpufreq_available_governors=() 00:20:39.777 20:15:37 -- scheduler/common.sh@244 -- # local -g cpufreq_available_governors 00:20:39.777 20:15:37 -- scheduler/common.sh@245 -- # cpufreq_high_prio=() 00:20:39.777 20:15:37 -- scheduler/common.sh@245 -- # local -g cpufreq_high_prio 00:20:39.777 20:15:37 -- scheduler/common.sh@246 -- # cpufreq_non_turbo_ratio=() 00:20:39.777 20:15:37 -- scheduler/common.sh@246 -- # local -g cpufreq_non_turbo_ratio 00:20:39.777 20:15:37 -- scheduler/common.sh@247 -- # cpufreq_setspeed=() 00:20:39.777 20:15:37 -- 
scheduler/common.sh@247 -- # local -g cpufreq_setspeed 00:20:39.777 20:15:37 -- scheduler/common.sh@248 -- # cpuinfo_max_freqs=() 00:20:39.777 20:15:37 -- scheduler/common.sh@248 -- # local -g cpuinfo_max_freqs 00:20:39.777 20:15:37 -- scheduler/common.sh@249 -- # cpuinfo_min_freqs=() 00:20:39.777 20:15:37 -- scheduler/common.sh@249 -- # local -g cpuinfo_min_freqs 00:20:39.777 20:15:37 -- scheduler/common.sh@250 -- # local -g turbo_enabled=0 00:20:39.777 20:15:37 -- scheduler/common.sh@251 -- # local cpu cpu_idx 00:20:39.777 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:39.777 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=0 00:20:39.777 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq ]] 00:20:39.777 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:39.777 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:39.777 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq/base_frequency ]] 00:20:39.777 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:39.777 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=999962 00:20:39.778 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:39.778 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:39.778 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_0 00:20:39.778 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_0[@]' 00:20:39.778 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:39.778 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_0 00:20:39.778 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_0[@]' 00:20:39.778 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:39.778 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:39.778 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 0 0xce 00:20:39.778 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:39.778 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:39.778 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:39.778 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:39.778 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:39.778 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:39.778 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:39.778 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:39.778 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:39.778 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:39.778 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 
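Annotation: at this point map_cpufreq in common.sh is building the frequency table for cpu0. Because the driver is intel_pstate, the script derives the list itself rather than reading it from sysfs, walking from base_frequency (2300000 kHz) down to cpuinfo_min_freq (1000000 kHz) in 100000 kHz steps and prepending base_max_freq+1 (2300001) as the turbo slot. A rough standalone equivalent of that derivation, assuming an intel_pstate system (an illustration, not the script's literal code):

    # sketch: rebuild the per-cpu frequency list the way the trace does for cpu0
    cpu=/sys/devices/system/cpu/cpu0
    base=$(< "$cpu/cpufreq/base_frequency")      # 2300000 in this run
    min=$(< "$cpu/cpufreq/cpuinfo_min_freq")     # 1000000 in this run
    freqs=( $((base + 1)) )                      # turbo marker, e.g. 2300001
    for ((f = base; f >= min; f -= 100000)); do freqs+=( "$f" ); done
    printf '%s\n' "${freqs[@]}"                  # 2300001 2300000 2200000 ... 1000000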
00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.778 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.778 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.778 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:39.778 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=1 00:20:39.778 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq ]] 00:20:39.778 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:39.778 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:39.778 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq/base_frequency ]] 00:20:39.778 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:39.778 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:39.778 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=1000000 00:20:39.778 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:39.778 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_1 00:20:39.778 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_1[@]' 00:20:39.778 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:39.778 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_1 00:20:39.778 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_1[@]' 00:20:39.778 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:39.778 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:39.778 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 1 0xce 00:20:39.778 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:39.779 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:39.779 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:39.779 20:15:37 -- scheduler/common.sh@291 -- # 
cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:39.779 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:39.779 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:39.779 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:39.779 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:39.779 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:39.779 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:39.779 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:39.779 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=10 00:20:39.779 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq ]] 00:20:39.779 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:39.779 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:39.779 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq/base_frequency ]] 00:20:39.779 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:39.779 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:39.779 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:39.779 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:39.779 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_10 00:20:39.779 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_10[@]' 00:20:39.779 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< 
"$cpu/cpufreq/scaling_available_governors")) 00:20:39.779 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_10 00:20:39.779 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_10[@]' 00:20:39.779 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:39.779 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:39.779 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 10 0xce 00:20:39.779 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:39.779 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:39.779 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:39.779 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:39.779 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:39.779 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:39.779 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:39.779 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:39.779 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:39.779 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:39.779 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.779 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.779 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 
)) 00:20:39.779 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:39.780 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=11 00:20:39.780 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq ]] 00:20:39.780 20:15:37 -- scheduler/common.sh@256 -- # 
cpufreq_drivers[cpu_idx]=intel_pstate 00:20:39.780 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:39.780 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq/base_frequency ]] 00:20:39.780 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:39.780 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:39.780 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:39.780 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:39.780 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_11 00:20:39.780 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_11[@]' 00:20:39.780 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:39.780 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_11 00:20:39.780 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_11[@]' 00:20:39.780 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:39.780 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:39.780 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 11 0xce 00:20:39.780 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:39.780 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:39.780 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:39.780 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:39.780 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:39.780 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:39.780 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:39.780 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:39.780 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:39.780 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:39.780 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=2200000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.780 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.780 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.780 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( 
freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:39.781 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=12 00:20:39.781 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq ]] 00:20:39.781 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:39.781 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:39.781 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq/base_frequency ]] 00:20:39.781 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:39.781 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:39.781 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:39.781 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:39.781 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_12 00:20:39.781 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_12[@]' 00:20:39.781 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:39.781 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_12 00:20:39.781 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_12[@]' 00:20:39.781 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:39.781 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:39.781 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 12 0xce 00:20:39.781 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:39.781 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:39.781 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:39.781 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:39.781 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:39.781 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:39.781 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:39.781 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:39.781 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:39.781 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:39.781 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 
00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1400000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:39.781 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:39.781 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:39.781 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:39.781 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=13 00:20:39.781 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq ]] 00:20:39.781 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:39.781 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:39.781 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq/base_frequency ]] 00:20:39.781 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:39.782 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:39.782 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:39.782 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:39.782 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_13 00:20:39.782 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_13[@]' 00:20:39.782 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:39.782 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_13 00:20:39.782 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_13[@]' 00:20:39.782 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:39.782 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:39.782 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 13 0xce 00:20:40.044 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.044 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.044 20:15:37 -- scheduler/common.sh@290 -- 
# cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.044 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.044 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.044 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.044 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.044 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.044 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.044 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.044 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 
00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.044 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=14 00:20:40.044 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq ]] 00:20:40.044 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.044 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.044 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq/base_frequency ]] 00:20:40.044 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.044 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.044 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.044 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.044 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_14 00:20:40.044 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_14[@]' 
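The common.sh@268-273 entries repeated for each core above use a bash nameref pattern: a dynamically named per-CPU array (available_governors_cpu_N) is filled through a local -n reference, and a master array records the 'name[@]' string needed to expand it later via indirect expansion. A minimal standalone sketch of that pattern follows; the helper name and sysfs path are illustrative, not the exact SPDK source.

#!/usr/bin/env bash
# Sketch of the per-CPU nameref pattern from common.sh@268-273 (illustrative names).

declare -a cpufreq_available_governors=()

probe_governors() {
    local cpu_idx=$1
    local sysfs="/sys/devices/system/cpu/cpu${cpu_idx}/cpufreq"

    # Refer to a dynamically named global array for this CPU.
    local -n available_governors="available_governors_cpu_${cpu_idx}"
    available_governors=($(< "$sysfs/scaling_available_governors"))

    # Record how to expand that array later via ${!...} indirect expansion.
    cpufreq_available_governors[cpu_idx]="available_governors_cpu_${cpu_idx}[@]"
}

probe_governors 0
echo "cpu0 governors: ${!cpufreq_available_governors[0]}"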
00:20:40.044 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.044 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_14 00:20:40.044 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_14[@]' 00:20:40.044 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.044 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.044 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 14 0xce 00:20:40.044 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.044 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.044 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.044 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.044 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.044 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.044 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.044 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.044 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.044 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.044 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.044 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.044 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.044 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 
-- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.045 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=15 00:20:40.045 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq ]] 
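Each per-core block above follows the same probe sequence: read the cpufreq sysfs limits, read MSR 0xCE (MSR_PLATFORM_INFO) through rdmsr.pl and take bits 15:8 as the maximum non-turbo ratio (0x70a2cf3811700 yields 23, i.e. 2300 MHz), then derive the high-priority and turbo flags from the comparisons at common.sh@292 and @300. A minimal sketch of that step, using msr-tools' rdmsr in place of the trace's rdmsr.pl helper (illustrative, not the exact SPDK source):

#!/usr/bin/env bash
# Sketch of the turbo/high-prio probe from common.sh@286-302 (illustrative).
# Requires the msr kernel module and root for rdmsr.

cpu_idx=${1:-0}
sysfs="/sys/devices/system/cpu/cpu${cpu_idx}/cpufreq"

base_freq=$(< "$sysfs/base_frequency")        # kHz, 2300000 in this trace
cpuinfo_max=$(< "$sysfs/cpuinfo_max_freq")    # kHz, 3700000 in this trace

raw=$(rdmsr -p "$cpu_idx" 0xce)               # e.g. 70a2cf3811700
non_turbo_ratio=$(( (0x$raw >> 8) & 0xff ))   # bits 15:8 -> 23 (2300 MHz)

high_prio=0
(( base_freq / 100000 > non_turbo_ratio )) && high_prio=1   # favored core

is_turbo=0
(( base_freq < cpuinfo_max )) && is_turbo=1                 # turbo headroom above base

echo "cpu${cpu_idx}: ratio=${non_turbo_ratio} high_prio=${high_prio} turbo=${is_turbo}"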
00:20:40.045 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.045 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.045 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq/base_frequency ]] 00:20:40.045 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.045 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000086 00:20:40.045 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.045 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.045 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_15 00:20:40.045 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_15[@]' 00:20:40.045 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.045 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_15 00:20:40.045 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_15[@]' 00:20:40.045 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.045 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.045 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 15 0xce 00:20:40.045 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.045 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.045 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.045 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.045 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.045 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.045 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.045 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.045 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.045 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.045 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- 
scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.045 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.045 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.045 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.045 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=16 00:20:40.045 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq ]] 00:20:40.046 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.046 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.046 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq/base_frequency ]] 00:20:40.046 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.046 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.046 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.046 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.046 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_16 00:20:40.046 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_16[@]' 00:20:40.046 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.046 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_16 00:20:40.046 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_16[@]' 00:20:40.046 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.046 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.046 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 16 0xce 00:20:40.046 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.046 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.046 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.046 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.046 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.046 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.046 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.046 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.046 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.046 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.046 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.046 20:15:37 -- 
scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 
00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.046 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=17 00:20:40.046 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq ]] 00:20:40.046 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.046 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.046 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq/base_frequency ]] 00:20:40.046 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.046 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.046 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.046 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.046 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_17 00:20:40.046 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_17[@]' 00:20:40.046 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.046 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_17 00:20:40.046 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_17[@]' 00:20:40.046 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.046 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.046 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 17 0xce 00:20:40.046 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.046 20:15:37 -- scheduler/common.sh@289 -- # 
cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.046 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.046 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.046 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.046 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.046 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.046 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.046 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.046 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.046 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.046 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.046 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.046 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1700000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.047 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=18 00:20:40.047 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq ]] 00:20:40.047 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.047 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.047 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq/base_frequency ]] 00:20:40.047 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.047 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1001775 00:20:40.047 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.047 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.047 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_18 00:20:40.047 20:15:37 -- 
scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_18[@]' 00:20:40.047 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.047 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_18 00:20:40.047 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_18[@]' 00:20:40.047 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.047 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.047 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 18 0xce 00:20:40.047 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.047 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.047 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.047 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.047 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.047 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.047 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.047 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.047 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.047 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.047 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 
00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.047 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.047 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.047 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.047 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=19 
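The available_freqs loop traced at common.sh@306-311 for every core builds the same table: slot 0 holds the turbo entry (base frequency + 1 kHz, the 2300001 reported by scaling_max_freq), and the remaining slots step down from the 2300000 base to the 1000000 minimum in 100 MHz increments, giving the 15 entries counted above. A standalone sketch of that construction (variable names follow the trace; the real script uses per-CPU namerefs instead of a plain local array, and the num_freqs formula here is an assumption consistent with the values in the trace):

#!/usr/bin/env bash
# Sketch of the frequency-table construction from common.sh@299-311 (illustrative).

base_max_freq=2300000      # kHz, from base_frequency
cpuinfo_min_freq=1000000   # kHz, from cpuinfo_min_freq
is_turbo=1                 # set when turbo headroom exists (see probe sketch above)

num_freqs=$(( (base_max_freq - cpuinfo_min_freq) / 100000 + 1 ))   # 14
(( is_turbo )) && (( num_freqs += 1 ))                             # 15 with turbo

available_freqs=()
for (( freq = 0; freq < num_freqs; freq++ )); do
    if (( freq == 0 && is_turbo == 1 )); then
        available_freqs[freq]=$(( base_max_freq + 1 ))             # 2300001 turbo slot
    else
        available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
    fi
done

printf '%s\n' "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000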
00:20:40.047 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq ]] 00:20:40.047 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.047 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.047 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq/base_frequency ]] 00:20:40.048 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.048 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.048 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:20:40.048 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.048 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_19 00:20:40.048 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_19[@]' 00:20:40.048 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.048 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_19 00:20:40.048 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_19[@]' 00:20:40.048 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.048 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.048 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 19 0xce 00:20:40.048 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.048 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.048 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.048 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.048 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.048 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.048 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.048 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.048 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.048 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.048 20:15:37 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 
00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.048 20:15:37 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.048 20:15:37 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.048 20:15:37 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.048 20:15:37 -- scheduler/common.sh@254 -- # cpu_idx=2 00:20:40.048 20:15:37 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq ]] 00:20:40.048 20:15:37 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.048 20:15:37 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.048 20:15:37 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq/base_frequency ]] 00:20:40.048 20:15:37 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.048 20:15:37 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=2300000 00:20:40.048 20:15:37 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300000 00:20:40.309 20:15:37 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=2300000 00:20:40.309 20:15:37 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_2 00:20:40.309 20:15:37 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_2[@]' 00:20:40.309 20:15:37 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.309 20:15:37 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_2 00:20:40.309 20:15:37 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_2[@]' 00:20:40.309 20:15:37 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.309 20:15:37 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.309 20:15:37 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 2 0xce 00:20:40.309 20:15:37 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.309 20:15:37 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.309 20:15:37 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.309 20:15:37 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.309 20:15:37 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.309 20:15:37 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.309 20:15:37 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.309 20:15:37 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.309 20:15:37 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.309 20:15:37 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.309 20:15:37 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.309 20:15:37 -- 
scheduler/common.sh@306 -- # available_freqs=() 00:20:40.309 20:15:37 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.309 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.309 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.309 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.309 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.309 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.309 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.309 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.309 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.309 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.309 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.309 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.309 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.309 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.310 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=20 00:20:40.310 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq ]] 00:20:40.310 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.310 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.310 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq/base_frequency ]] 00:20:40.310 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.310 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.310 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:20:40.310 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.310 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_20 00:20:40.310 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_20[@]' 00:20:40.310 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.310 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_20 00:20:40.310 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_20[@]' 00:20:40.310 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.310 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.310 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 20 0xce 00:20:40.310 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 
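The non_turbo_ratio value logged for each CPU (0x70a2cf3811700, read via rdmsr.pl from MSR 0xce) is reduced to the ratio 23 stored in cpufreq_non_turbo_ratio. Assuming MSR 0xce is MSR_PLATFORM_INFO with the maximum non-turbo ratio in bits 15:8 and a 100 MHz ratio step, the arithmetic checks out against the trace:

    msr=0x70a2cf3811700                  # raw value reported by rdmsr.pl in the log
    ratio=$(( (msr >> 8) & 0xff ))       # bits 15:8 -> 0x17 = 23
    echo "$ratio $(( ratio * 100000 ))"  # 23 2300000, matching base_max_freq in kHz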
00:20:40.310 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.310 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.310 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.310 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.310 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.310 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.310 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.310 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.310 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.310 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 
-- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.310 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.310 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.310 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.310 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=21 00:20:40.311 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq ]] 00:20:40.311 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.311 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.311 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq/base_frequency ]] 00:20:40.311 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.311 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.311 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:20:40.311 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.311 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_21 
00:20:40.311 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_21[@]' 00:20:40.311 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.311 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_21 00:20:40.311 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_21[@]' 00:20:40.311 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.311 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.311 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 21 0xce 00:20:40.311 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.311 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.311 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.311 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.311 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.311 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.311 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.311 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.311 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.311 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.311 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.311 20:15:38 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.311 20:15:38 -- 
scheduler/common.sh@254 -- # cpu_idx=22 00:20:40.311 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq ]] 00:20:40.311 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.311 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.311 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq/base_frequency ]] 00:20:40.311 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.311 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.311 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=3700000 00:20:40.311 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.311 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_22 00:20:40.311 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_22[@]' 00:20:40.311 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.311 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_22 00:20:40.311 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_22[@]' 00:20:40.311 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.311 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.311 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 22 0xce 00:20:40.311 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.311 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.311 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.311 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.311 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.311 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.311 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.311 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.311 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.311 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.311 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < 
num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.311 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.311 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.311 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.312 20:15:38 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.312 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=23 00:20:40.312 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq ]] 00:20:40.312 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.312 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.312 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq/base_frequency ]] 00:20:40.312 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.312 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.312 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.312 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.312 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_23 00:20:40.312 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_23[@]' 00:20:40.312 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.312 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_23 00:20:40.312 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_23[@]' 00:20:40.312 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.312 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.312 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 23 0xce 00:20:40.312 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.312 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.312 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.312 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.312 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.312 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.312 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.312 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.312 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.312 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@302 -- # 
cpufreq_is_turbo[cpu_idx]=1 00:20:40.312 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < 
num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.312 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.312 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.312 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.312 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=24 00:20:40.312 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq ]] 00:20:40.312 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.312 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.312 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq/base_frequency ]] 00:20:40.312 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.313 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1600000 00:20:40.313 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.313 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.313 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_24 00:20:40.313 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_24[@]' 00:20:40.313 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.313 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_24 00:20:40.313 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_24[@]' 00:20:40.313 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.313 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.313 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 24 0xce 00:20:40.313 20:15:38 -- scheduler/common.sh@288 -- # 
non_turbo_ratio=0x70a2cf3811700 00:20:40.313 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.313 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.313 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.313 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.313 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.313 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.313 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.313 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.313 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.313 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.313 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.313 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.313 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.313 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=25 00:20:40.313 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq ]] 00:20:40.313 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.313 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.313 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq/base_frequency ]] 00:20:40.313 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.313 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.313 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.313 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.313 20:15:38 -- scheduler/common.sh@268 -- # 
local -n available_governors=available_governors_cpu_25 00:20:40.313 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_25[@]' 00:20:40.313 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.313 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_25 00:20:40.313 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_25[@]' 00:20:40.313 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.313 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.313 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 25 0xce 00:20:40.313 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.313 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.313 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.313 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.313 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.575 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.575 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.575 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.575 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.575 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.575 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=2000000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.575 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.575 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.575 20:15:38 -- scheduler/common.sh@253 -- # for 
00:20:40.575 20:15:38 -- scheduler/common.sh@253-266 -- # cpu26: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1000359 max_freq=2300001 min_freq=1000000
00:20:40.576 20:15:38 -- scheduler/common.sh@268-291 -- # cpu26: governors/freqs arrays bound; rdmsr.pl 26 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.576 20:15:38 -- scheduler/common.sh@292-311 -- # cpu26: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
00:20:40.576 20:15:38 -- scheduler/common.sh@253-266 -- # cpu27: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1000618 max_freq=2300001 min_freq=1000000
00:20:40.576 20:15:38 -- scheduler/common.sh@268-291 -- # cpu27: governors/freqs arrays bound; rdmsr.pl 27 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.576 20:15:38 -- scheduler/common.sh@292-311 -- # cpu27: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
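Every CPU in this range ends up with the same 15-entry table, which matches the fill loop logged at @306-311: when turbo is available, slot 0 gets base_max_freq+1 kHz and the remaining slots step down from base_max_freq in 100 MHz (100000 kHz) increments. A small sketch consistent with those logged values (the actual scheduler/common.sh assignments are only visible here through their results):

    base_max_freq=2300000
    is_turbo=1
    num_freqs=15                       # 14 P-states + 1 turbo slot
    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$(( base_max_freq + 1 ))              # turbo marker: 2300001
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    echo "${available_freqs[@]}"       # 2300001 2300000 2200000 ... 1000000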
00:20:40.577 20:15:38 -- scheduler/common.sh@253-266 -- # cpu28: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1000000 max_freq=2300001 min_freq=1000000
00:20:40.577 20:15:38 -- scheduler/common.sh@268-291 -- # cpu28: governors/freqs arrays bound; rdmsr.pl 28 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.577 20:15:38 -- scheduler/common.sh@292-311 -- # cpu28: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
00:20:40.578 20:15:38 -- scheduler/common.sh@253-266 -- # cpu29: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1000000 max_freq=2300001 min_freq=1000000
00:20:40.578 20:15:38 -- scheduler/common.sh@268-291 -- # cpu29: governors/freqs arrays bound; rdmsr.pl 29 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.578 20:15:38 -- scheduler/common.sh@292-311 -- # cpu29: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
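The per-CPU values above (driver, governor, base/cur/min/max frequency) all come from the standard cpufreq sysfs tree that the traced @253 loop walks. A condensed, self-contained sketch of that discovery pass, written for illustration rather than copied from scheduler/common.sh:

    #!/usr/bin/env bash
    shopt -s extglob
    sysfs_cpu=/sys/devices/system/cpu
    for cpu in "$sysfs_cpu"/cpu+([0-9]); do
        [[ -e $cpu/cpufreq ]] || continue
        idx=${cpu##*cpu}
        driver=$(< "$cpu/cpufreq/scaling_driver")
        governor=$(< "$cpu/cpufreq/scaling_governor")
        base=n/a
        [[ -e $cpu/cpufreq/base_frequency ]] && base=$(< "$cpu/cpufreq/base_frequency")
        cur=$(< "$cpu/cpufreq/scaling_cur_freq")
        min=$(< "$cpu/cpufreq/scaling_min_freq")
        max=$(< "$cpu/cpufreq/scaling_max_freq")
        echo "cpu$idx: $driver/$governor base=$base cur=$cur min=$min max=$max"
    done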
00:20:40.578 20:15:38 -- scheduler/common.sh@253-266 -- # cpu3: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1000000 max_freq=2300001 min_freq=1000000
00:20:40.579 20:15:38 -- scheduler/common.sh@268-291 -- # cpu3: governors/freqs arrays bound; rdmsr.pl 3 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.579 20:15:38 -- scheduler/common.sh@292-311 -- # cpu3: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
00:20:40.579 20:15:38 -- scheduler/common.sh@253-266 -- # cpu30: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1000000 max_freq=2300001 min_freq=1000000
00:20:40.579 20:15:38 -- scheduler/common.sh@268-291 -- # cpu30: governors/freqs arrays bound; rdmsr.pl 30 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.580 20:15:38 -- scheduler/common.sh@292-311 -- # cpu30: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
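Two checks logged at @292 and @300 decide the high-priority flag and the extra turbo slot for every core in this range. The arithmetic below reproduces the logged outcome for these values (high_prio=0, turbo present, num_freqs 14 -> 15); deriving base_max_freq as ratio*100000 is an assumption consistent with the numbers, not a line taken from the script:

    base_freq=2300000; non_turbo_ratio=23; cpuinfo_max_freq=3700000
    num_freqs=14; is_turbo=0; high_prio=0
    base_max_freq=$(( non_turbo_ratio * 100000 ))              # 2300000
    (( base_freq / 100000 > non_turbo_ratio )) && high_prio=1  # 23 > 23 is false -> stays 0
    if (( base_max_freq < cpuinfo_max_freq )); then            # 2300000 < 3700000 -> turbo slot added
        is_turbo=1
        (( num_freqs += 1 ))                                   # 15 entries including the turbo slot
    fi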
00:20:40.580 20:15:38 -- scheduler/common.sh@253-266 -- # cpu31: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1031452 max_freq=2300001 min_freq=1000000
00:20:40.842 20:15:38 -- scheduler/common.sh@268-291 -- # cpu31: governors/freqs arrays bound; rdmsr.pl 31 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.842 20:15:38 -- scheduler/common.sh@292-311 -- # cpu31: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
00:20:40.842 20:15:38 -- scheduler/common.sh@253-266 -- # cpu32: driver=intel_pstate governor=powersave base_freq=2300000 cur_freq=1000000 max_freq=2300001 min_freq=1000000
00:20:40.842 20:15:38 -- scheduler/common.sh@268-291 -- # cpu32: governors/freqs arrays bound; rdmsr.pl 32 0xce -> non_turbo_ratio=0x70a2cf3811700 (ratio 23), cpuinfo min=1000000 max=3700000
00:20:40.843 20:15:38 -- scheduler/common.sh@292-311 -- # cpu32: high_prio=0, base_max_freq=2300000, turbo detected, num_freqs=14+1, available_freqs same 15-entry list as cpu25
scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.843 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=33 00:20:40.843 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu33/cpufreq ]] 00:20:40.843 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.843 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.843 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu33/cpufreq/base_frequency ]] 00:20:40.843 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.843 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.843 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.843 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.843 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_33 00:20:40.843 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_33[@]' 00:20:40.843 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.843 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_33 00:20:40.843 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_33[@]' 00:20:40.843 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.843 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.843 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 33 0xce 00:20:40.843 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.843 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.843 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.843 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.843 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.843 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.843 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.843 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.843 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.843 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.843 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 
-- # available_freqs[freq]=2300000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # 
(( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.843 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.843 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.843 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.843 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=34 00:20:40.843 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq ]] 00:20:40.843 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.843 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.843 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq/base_frequency ]] 00:20:40.843 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.843 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.843 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.843 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.843 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_34 00:20:40.843 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_34[@]' 00:20:40.844 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.844 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_34 00:20:40.844 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_34[@]' 00:20:40.844 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.844 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.844 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 34 0xce 00:20:40.844 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.844 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.844 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.844 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.844 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.844 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.844 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.844 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.844 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < 
cpuinfo_max_freqs[cpu_idx] )) 00:20:40.844 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.844 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1500000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.844 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=35 00:20:40.844 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq ]] 00:20:40.844 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.844 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.844 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq/base_frequency ]] 00:20:40.844 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.844 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.844 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.844 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.844 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_35 00:20:40.844 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_35[@]' 00:20:40.844 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.844 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_35 00:20:40.844 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_35[@]' 00:20:40.844 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.844 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.844 20:15:38 -- 
scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 35 0xce 00:20:40.844 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.844 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.844 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.844 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.844 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.844 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.844 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.844 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.844 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.844 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.844 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 
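The trace entries around scheduler/common.sh lines 286-311 repeat the same derivation for every CPU: read MSR 0xCE via rdmsr.pl, take the max non-turbo ratio from bits 15:8 (0x70a2cf3811700 -> 0x17 = 23 -> 2300000 kHz), count the 100 MHz steps down to cpuinfo_min_freq (14), add one extra slot because cpuinfo_max_freq (3700000) exceeds the non-turbo max (turbo available), and fill available_freqs with 2300001 followed by 2300000 down to 1000000. The following is a minimal, self-contained sketch of that derivation, not the actual scheduler/common.sh source: the helper name sketch_available_freqs is hypothetical, the defaults are the values visible in this log, and the MSR 0xCE bit layout is assumed to be the standard Intel one consistent with those values.

#!/usr/bin/env bash
# Sketch of the intel_pstate frequency-list derivation traced above.
# Inputs default to the values recorded in this log.

sketch_available_freqs() {
    local msr=${1:-0x70a2cf3811700}    # rdmsr.pl <cpu> 0xce output from the log
    local min_freq=${2:-1000000}       # cpuinfo_min_freq (kHz) from the log
    local cpuinfo_max=${3:-3700000}    # cpuinfo_max_freq (kHz) from the log

    # MSR 0xCE bits 15:8 hold the max non-turbo ratio: 0x17 = 23 -> 2300 MHz.
    local ratio=$(( (msr >> 8) & 0xff ))
    local base_max_freq=$(( ratio * 100000 ))          # 2300000 kHz

    # 14 steps of 100 MHz from 2.3 GHz down to 1.0 GHz, plus one extra slot
    # when the hardware max exceeds the non-turbo max (turbo available).
    local num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 ))
    local is_turbo=0
    if (( base_max_freq < cpuinfo_max )); then
        (( num_freqs += 1 ))
        is_turbo=1
    fi

    local -a freqs=()
    local i
    for (( i = 0; i < num_freqs; i++ )); do
        if (( i == 0 && is_turbo == 1 )); then
            freqs[i]=$(( base_max_freq + 1 ))          # 2300001 flags the turbo slot
        else
            freqs[i]=$(( base_max_freq - (i - is_turbo) * 100000 ))
        fi
    done
    printf '%s\n' "${freqs[@]}"
}

# With the defaults above this prints 2300001 2300000 2200000 ... 1000000,
# i.e. the 15 entries recorded for each available_freqs_cpu_N array in the trace.
sketch_available_freqs
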
00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.844 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.844 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.844 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.845 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=36 00:20:40.845 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq ]] 00:20:40.845 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.845 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.845 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq/base_frequency ]] 00:20:40.845 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.845 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.845 20:15:38 -- scheduler/common.sh@265 -- # 
cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.845 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.845 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_36 00:20:40.845 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_36[@]' 00:20:40.845 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.845 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_36 00:20:40.845 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_36[@]' 00:20:40.845 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.845 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.845 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 36 0xce 00:20:40.845 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.845 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.845 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.845 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.845 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.845 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.845 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.845 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.845 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.845 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.845 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs 
)) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.845 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:40.845 20:15:38 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.845 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.845 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:40.845 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=37 00:20:40.845 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq ]] 00:20:40.845 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:40.845 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:40.845 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq/base_frequency ]] 00:20:40.845 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:40.845 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:40.845 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:40.845 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:40.845 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_37 00:20:40.845 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_37[@]' 00:20:40.845 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:40.845 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_37 00:20:40.845 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_37[@]' 00:20:40.845 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:40.845 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:40.845 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 37 0xce 00:20:40.845 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:40.845 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:40.846 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:40.846 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:40.846 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:40.846 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:40.846 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:40.846 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:40.846 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:40.846 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:40.846 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.846 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.846 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.846 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.846 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.846 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.846 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:40.846 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:40.846 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:40.846 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 
-- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.108 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.108 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.108 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.108 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=38 00:20:41.109 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq ]] 00:20:41.109 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.109 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.109 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq/base_frequency ]] 00:20:41.109 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.109 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.109 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.109 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.109 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_38 00:20:41.109 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_38[@]' 00:20:41.109 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.109 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_38 00:20:41.109 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_38[@]' 00:20:41.109 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.109 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.109 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 38 0xce 00:20:41.109 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.109 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.109 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.109 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.109 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.109 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.109 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.109 20:15:38 -- scheduler/common.sh@299 -- # 
num_freqs=14 00:20:41.109 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.109 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.109 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.109 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.109 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.109 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.109 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=39 00:20:41.109 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq ]] 00:20:41.109 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.109 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.109 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq/base_frequency ]] 00:20:41.109 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.109 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.109 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.109 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.109 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_39 00:20:41.109 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_39[@]' 00:20:41.109 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.109 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_39 00:20:41.109 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_39[@]' 00:20:41.109 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.109 20:15:38 -- scheduler/common.sh@286 -- # 
local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.109 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 39 0xce 00:20:41.109 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.109 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.109 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.109 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.109 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.109 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.109 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.109 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.110 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.110 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.110 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1800000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.110 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=4 00:20:41.110 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq ]] 00:20:41.110 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.110 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.110 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq/base_frequency ]] 00:20:41.110 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.110 20:15:38 -- scheduler/common.sh@264 -- # 
cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.110 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.110 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.110 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_4 00:20:41.110 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_4[@]' 00:20:41.110 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.110 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_4 00:20:41.110 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_4[@]' 00:20:41.110 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.110 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.110 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 4 0xce 00:20:41.110 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.110 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.110 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.110 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.110 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.110 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.110 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.110 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.110 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.110 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.110 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( 
freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.110 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.110 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.110 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- 
scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.111 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=40 00:20:41.111 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq ]] 00:20:41.111 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.111 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.111 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq/base_frequency ]] 00:20:41.111 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.111 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.111 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.111 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.111 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_40 00:20:41.111 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_40[@]' 00:20:41.111 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.111 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_40 00:20:41.111 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_40[@]' 00:20:41.111 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.111 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.111 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 40 0xce 00:20:41.111 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.111 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.111 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.111 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.111 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.111 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.111 20:15:38 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.111 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.111 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.111 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.111 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 
00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.111 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.111 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.111 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.112 20:15:38 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.112 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=41 00:20:41.112 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu41/cpufreq ]] 00:20:41.112 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.112 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.112 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu41/cpufreq/base_frequency ]] 00:20:41.112 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.112 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.112 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.112 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.112 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_41 00:20:41.112 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_41[@]' 00:20:41.112 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.112 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_41 00:20:41.112 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_41[@]' 00:20:41.112 20:15:38 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.112 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.112 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 41 0xce 00:20:41.112 20:15:38 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.112 20:15:38 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.112 20:15:38 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.112 20:15:38 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.112 20:15:38 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.112 20:15:38 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.112 20:15:38 -- 
scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.112 20:15:38 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.112 20:15:38 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.112 20:15:38 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.112 20:15:38 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- 
scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.112 20:15:38 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.112 20:15:38 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.112 20:15:38 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.112 20:15:38 -- scheduler/common.sh@254 -- # cpu_idx=42 00:20:41.112 20:15:38 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq ]] 00:20:41.112 20:15:38 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.112 20:15:38 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.112 20:15:38 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq/base_frequency ]] 00:20:41.112 20:15:38 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.112 20:15:38 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.112 20:15:38 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.112 20:15:38 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.112 20:15:38 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_42 00:20:41.112 20:15:38 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_42[@]' 00:20:41.112 20:15:38 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.112 20:15:38 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_42 00:20:41.112 20:15:38 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_42[@]' 00:20:41.112 20:15:38 -- 
scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.112 20:15:38 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.113 20:15:38 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 42 0xce 00:20:41.113 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.113 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.113 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.113 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.113 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.113 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.113 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.113 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.113 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.113 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.113 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( 
freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.113 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.113 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.113 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.113 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=43 00:20:41.113 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq ]] 00:20:41.113 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.113 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.113 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq/base_frequency ]] 00:20:41.113 20:15:39 -- scheduler/common.sh@261 -- # 
cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.113 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.113 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.113 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.113 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_43 00:20:41.113 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_43[@]' 00:20:41.113 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.113 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_43 00:20:41.113 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_43[@]' 00:20:41.113 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.113 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.113 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 43 0xce 00:20:41.375 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.375 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.375 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.375 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.375 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.375 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.375 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.375 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.375 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.375 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.375 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- 
scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.375 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.375 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.375 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.376 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=44 00:20:41.376 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq ]] 00:20:41.376 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.376 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.376 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq/base_frequency ]] 00:20:41.376 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.376 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.376 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_44 00:20:41.376 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_44[@]' 00:20:41.376 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.376 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_44 00:20:41.376 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_44[@]' 00:20:41.376 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.376 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.376 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 44 0xce 00:20:41.376 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.376 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.376 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.376 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.376 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.376 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.376 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.376 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.376 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.376 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.376 20:15:39 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 
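The trace above repeats the same per-CPU enumeration from test/scheduler/common.sh (the @253-@311 markers): read the cpufreq driver, governor and base/min/max frequencies from sysfs, probe MSR 0xce for the non-turbo ratio, then fill available_freqs with an optional turbo slot (scaling_max_freq reported as base + 1 kHz) followed by 100 MHz steps down to the minimum. The following is a minimal standalone sketch of that logic, not the SPDK script itself; the function name enumerate_cpu_freqs and the direct sysfs reads are illustrative assumptions, and the MSR-based non-turbo check is omitted for brevity.

#!/usr/bin/env bash
# Sketch of the per-CPU frequency enumeration reflected in the trace above.
# Assumes an intel_pstate system exposing base_frequency in sysfs.
sysfs_cpu=/sys/devices/system/cpu

enumerate_cpu_freqs() {
    local cpu_idx=$1
    local cpu=$sysfs_cpu/cpu$cpu_idx
    [[ -e $cpu/cpufreq ]] || return 0

    local base_freq max_freq min_freq is_turbo=0
    base_freq=$(< "$cpu/cpufreq/base_frequency")   # e.g. 2300000
    max_freq=$(< "$cpu/cpufreq/scaling_max_freq")  # e.g. 2300001
    min_freq=$(< "$cpu/cpufreq/scaling_min_freq")  # e.g. 1000000

    # One entry per 100 MHz step between base and min (14 for 2.3 GHz..1.0 GHz),
    # plus one turbo slot when the platform can exceed the base frequency.
    local num_freqs=$(( (base_freq - min_freq) / 100000 + 1 ))
    if (( max_freq > base_freq )); then
        is_turbo=1
        (( num_freqs += 1 ))
    fi

    local -a available_freqs=()
    local freq
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$max_freq        # turbo marker, e.g. 2300001
        else
            available_freqs[freq]=$(( base_freq - (freq - is_turbo) * 100000 ))
        fi
    done

    printf 'cpu%s: %s\n' "$cpu_idx" "${available_freqs[*]}"
}

# Example invocation (CPU index is illustrative):
enumerate_cpu_freqs 40

With the values seen in this log (base 2300000, min 1000000, max 2300001) the sketch yields 15 entries, 2300001 then 2300000 down to 1000000 in 100000 kHz steps, matching the available_freqs assignments traced for each CPU; the base + 1 kHz maximum is how the turbo capability is flagged.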
00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.376 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=45 00:20:41.376 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq ]] 00:20:41.376 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.376 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.376 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq/base_frequency ]] 00:20:41.376 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.376 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.376 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_45 00:20:41.376 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_45[@]' 00:20:41.376 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.376 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_45 00:20:41.376 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_45[@]' 00:20:41.376 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.376 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.376 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 45 0xce 00:20:41.376 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.376 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.376 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.376 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.376 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.376 
20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.376 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.376 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.376 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.376 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.376 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.376 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.376 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.376 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.377 
20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.377 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=46 00:20:41.377 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq ]] 00:20:41.377 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.377 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.377 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq/base_frequency ]] 00:20:41.377 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.377 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.377 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.377 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.377 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_46 00:20:41.377 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_46[@]' 00:20:41.377 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.377 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_46 00:20:41.377 20:15:39 -- scheduler/common.sh@273 -- # 
cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_46[@]' 00:20:41.377 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.377 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.377 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 46 0xce 00:20:41.377 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.377 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.377 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.377 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.377 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.377 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.377 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.377 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.377 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.377 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.377 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 
-- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.377 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.377 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.377 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.378 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=47 00:20:41.378 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu47/cpufreq ]] 00:20:41.378 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.378 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.378 20:15:39 -- scheduler/common.sh@260 -- # [[ -e 
/sys/devices/system/cpu/cpu47/cpufreq/base_frequency ]] 00:20:41.378 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.378 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.378 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.378 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.378 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_47 00:20:41.378 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_47[@]' 00:20:41.378 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.378 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_47 00:20:41.378 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_47[@]' 00:20:41.378 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.378 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.378 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 47 0xce 00:20:41.378 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.378 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.378 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.378 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.378 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.378 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.378 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.378 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.378 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.378 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.378 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 
00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.378 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=48 00:20:41.378 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq ]] 00:20:41.378 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.378 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.378 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq/base_frequency ]] 00:20:41.378 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.378 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.378 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.378 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.378 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_48 00:20:41.378 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_48[@]' 00:20:41.378 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.378 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_48 00:20:41.378 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_48[@]' 00:20:41.378 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.378 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.378 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 48 0xce 00:20:41.378 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.378 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.378 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.378 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.378 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.378 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.378 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.378 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.378 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.378 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.378 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- 
scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.378 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.378 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.378 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.379 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=49 00:20:41.379 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq ]] 00:20:41.379 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.379 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.379 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq/base_frequency ]] 00:20:41.379 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.379 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.379 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.379 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.379 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_49 00:20:41.379 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_49[@]' 00:20:41.379 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.379 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_49 00:20:41.379 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_49[@]' 00:20:41.379 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.379 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.379 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 49 0xce 00:20:41.379 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.379 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.379 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.379 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.379 20:15:39 -- scheduler/common.sh@292 -- # (( 
cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.379 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.379 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.379 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.379 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.379 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.379 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 
20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.379 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.379 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.379 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.379 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=5 00:20:41.379 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq ]] 00:20:41.379 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.379 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.379 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq/base_frequency ]] 00:20:41.379 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.380 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.380 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.380 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.641 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_5 00:20:41.641 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_5[@]' 00:20:41.641 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.641 20:15:39 -- scheduler/common.sh@272 -- # local -n 
available_freqs=available_freqs_cpu_5 00:20:41.641 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_5[@]' 00:20:41.641 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.641 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.641 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 5 0xce 00:20:41.641 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.641 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.641 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.641 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.641 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.641 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.641 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.641 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.641 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.641 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.641 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.641 20:15:39 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.641 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.641 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.641 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.642 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=50 00:20:41.642 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq ]] 00:20:41.642 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.642 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 
00:20:41.642 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq/base_frequency ]] 00:20:41.642 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.642 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.642 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_50 00:20:41.642 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_50[@]' 00:20:41.642 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.642 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_50 00:20:41.642 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_50[@]' 00:20:41.642 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.642 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.642 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 50 0xce 00:20:41.642 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.642 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.642 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.642 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.642 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.642 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.642 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.642 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.642 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.642 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( 
freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.642 20:15:39 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.642 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.642 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.642 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.642 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=51 00:20:41.642 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq ]] 00:20:41.642 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.642 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.642 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq/base_frequency ]] 00:20:41.642 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.642 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.642 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_51 00:20:41.642 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_51[@]' 00:20:41.642 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.642 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_51 00:20:41.642 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_51[@]' 00:20:41.642 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.642 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.642 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 51 0xce 00:20:41.642 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.642 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.642 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.642 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.643 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.643 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.643 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.643 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.643 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.643 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.643 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 
-- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.643 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=52 00:20:41.643 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq ]] 00:20:41.643 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.643 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.643 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq/base_frequency ]] 00:20:41.643 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.643 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.643 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.643 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.643 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_52 00:20:41.643 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_52[@]' 00:20:41.643 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.643 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_52 00:20:41.643 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_52[@]' 00:20:41.643 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.643 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.643 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 52 0xce 00:20:41.643 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.643 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.643 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.643 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.643 
20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.643 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.643 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.643 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.643 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.643 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.643 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.643 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.643 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.643 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.644 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=53 00:20:41.644 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq ]] 00:20:41.644 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.644 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.644 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq/base_frequency ]] 00:20:41.644 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.644 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000011 00:20:41.644 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.644 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.644 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_53 00:20:41.644 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_53[@]' 00:20:41.644 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.644 20:15:39 -- 
scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_53 00:20:41.644 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_53[@]' 00:20:41.644 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.644 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.644 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 53 0xce 00:20:41.644 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.644 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.644 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.644 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.644 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.644 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.644 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.644 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.644 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.644 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.644 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1900000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.644 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.644 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=54 00:20:41.644 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq ]] 00:20:41.644 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.644 20:15:39 -- 
scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.644 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq/base_frequency ]] 00:20:41.644 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.644 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.644 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.644 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.644 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_54 00:20:41.644 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_54[@]' 00:20:41.644 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.644 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_54 00:20:41.644 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_54[@]' 00:20:41.644 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.644 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.644 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 54 0xce 00:20:41.644 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.644 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.644 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.644 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.644 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.644 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.644 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.644 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.644 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.644 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.644 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.644 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.644 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( 
freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- 
scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.645 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.645 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.645 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.645 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=55 00:20:41.645 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq ]] 00:20:41.645 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.645 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.645 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq/base_frequency ]] 00:20:41.645 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.645 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000032 00:20:41.645 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.645 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.645 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_55 00:20:41.645 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_55[@]' 00:20:41.645 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.645 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_55 00:20:41.645 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_55[@]' 00:20:41.645 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.645 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.645 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 55 0xce 00:20:41.645 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.645 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.645 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.907 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.907 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.907 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.907 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.907 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.907 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.907 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.907 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 
00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.907 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.907 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.907 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.908 20:15:39 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.908 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=56 00:20:41.908 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq ]] 00:20:41.908 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.908 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.908 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq/base_frequency ]] 00:20:41.908 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.908 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.908 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_56 00:20:41.908 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_56[@]' 00:20:41.908 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.908 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_56 00:20:41.908 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_56[@]' 00:20:41.908 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.908 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.908 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 56 0xce 00:20:41.908 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.908 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.908 
20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.908 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.908 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.908 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.908 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.908 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.908 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.908 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq 
< num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.908 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.908 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.908 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.908 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=57 00:20:41.908 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq ]] 00:20:41.908 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.908 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.908 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq/base_frequency ]] 00:20:41.908 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.908 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.908 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_57 00:20:41.908 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_57[@]' 00:20:41.908 20:15:39 -- scheduler/common.sh@270 -- # 
available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.908 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_57 00:20:41.908 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_57[@]' 00:20:41.908 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.908 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.908 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 57 0xce 00:20:41.908 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.908 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.908 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.908 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.908 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.909 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.909 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.909 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.909 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.909 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.909 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.909 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=58 00:20:41.909 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq ]] 00:20:41.909 20:15:39 -- scheduler/common.sh@256 
-- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.909 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.909 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq/base_frequency ]] 00:20:41.909 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.909 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.909 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.909 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.909 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_58 00:20:41.909 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_58[@]' 00:20:41.909 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.909 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_58 00:20:41.909 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_58[@]' 00:20:41.909 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.909 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.909 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 58 0xce 00:20:41.909 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.909 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.909 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.909 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.909 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.909 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.909 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.909 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.909 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.909 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.909 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=2200000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.909 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.909 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.909 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( 
freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.910 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=59 00:20:41.910 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq ]] 00:20:41.910 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.910 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.910 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq/base_frequency ]] 00:20:41.910 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.910 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.910 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.910 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.910 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_59 00:20:41.910 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_59[@]' 00:20:41.910 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.910 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_59 00:20:41.910 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_59[@]' 00:20:41.910 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.910 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.910 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 59 0xce 00:20:41.910 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.910 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.910 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.910 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.910 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.910 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.910 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.910 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.910 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.910 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.910 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 
00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1400000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.910 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.910 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=6 00:20:41.910 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq ]] 00:20:41.910 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.910 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.910 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq/base_frequency ]] 00:20:41.910 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.910 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000045 00:20:41.910 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.910 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.910 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_6 00:20:41.910 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_6[@]' 00:20:41.910 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.910 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_6 00:20:41.910 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_6[@]' 00:20:41.910 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.910 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.910 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 6 0xce 00:20:41.910 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.910 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.910 20:15:39 -- scheduler/common.sh@290 -- # 
cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.910 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.910 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.910 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.910 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.910 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.910 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.910 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.910 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.910 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 
20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.911 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=60 00:20:41.911 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq ]] 00:20:41.911 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.911 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.911 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq/base_frequency ]] 00:20:41.911 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.911 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:41.911 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.911 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.911 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_60 00:20:41.911 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_60[@]' 00:20:41.911 20:15:39 
-- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.911 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_60 00:20:41.911 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_60[@]' 00:20:41.911 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.911 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.911 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 60 0xce 00:20:41.911 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:41.911 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:41.911 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:41.911 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:41.911 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:41.911 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:41.911 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:41.911 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:41.911 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:41.911 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:41.911 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.911 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.911 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.911 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.912 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:41.912 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.912 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.912 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.912 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:41.912 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.912 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.912 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:41.912 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:41.912 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:41.912 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:41.912 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:41.912 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=61 00:20:41.912 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq ]] 
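The rdmsr.pl <cpu> 0xce calls traced above read MSR_PLATFORM_INFO, and the raw value 0x70a2cf3811700 is where cpufreq_non_turbo_ratio=23 comes from: bits 15:8 of that MSR hold the maximum non-turbo ratio in units of 100 MHz, so 0x17 = 23 corresponds to the 2300000 kHz base_frequency reported by sysfs. A minimal sketch of that decode, using the value seen in this run (the variable names below are illustrative, not the ones used by scheduler/common.sh):

    # sketch: derive the max non-turbo ratio from the raw MSR 0xCE value
    non_turbo_ratio_raw=0x70a2cf3811700              # value printed by rdmsr.pl above
    ratio=$(( (non_turbo_ratio_raw >> 8) & 0xff ))   # bits 15:8 -> 23
    base_khz=$(( ratio * 100000 ))                   # 2300000 kHz, matches base_frequency
    echo "non-turbo ratio=$ratio base=${base_khz}kHz"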
00:20:41.912 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:41.912 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:41.912 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq/base_frequency ]] 00:20:41.912 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:41.912 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000204 00:20:41.912 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:41.912 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:41.912 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_61 00:20:41.912 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_61[@]' 00:20:41.912 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:41.912 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_61 00:20:41.912 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_61[@]' 00:20:41.912 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:41.912 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:41.912 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 61 0xce 00:20:42.174 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.174 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.174 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.174 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.174 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.174 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.174 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.174 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.174 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.174 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.174 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- 
scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.174 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.174 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.174 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.175 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=62 00:20:42.175 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq ]] 00:20:42.175 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.175 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.175 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq/base_frequency ]] 00:20:42.175 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.175 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.175 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.175 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.175 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_62 00:20:42.175 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_62[@]' 00:20:42.175 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.175 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_62 00:20:42.175 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_62[@]' 00:20:42.175 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.175 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.175 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 62 0xce 00:20:42.175 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.175 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.175 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.175 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.175 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.175 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.175 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.175 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.175 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.175 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.175 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.175 20:15:39 -- 
scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 
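The per-core loop above then synthesizes the frequency table instead of reading scaling_available_frequencies, which intel_pstate does not expose: base_max_freq=2300000 and a minimum of 1000000 give 14 steps of 100 MHz, one extra slot is added because cpuinfo_max_freq (3700000) exceeds the base, and slot 0 receives base+1 (2300001), the intel_pstate convention for "turbo allowed". A condensed equivalent of what the trace is building, assuming the same values as this run:

    base_max_freq=2300000 min_freq=1000000 is_turbo=1
    num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 ))   # 14 P-states
    (( is_turbo )) && (( num_freqs += 1 ))                     # extra slot for turbo
    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo )); then
            available_freqs[freq]=$(( base_max_freq + 1 ))     # 2300001 turbo sentinel
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    printf '%s\n' "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000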
00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.175 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.175 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.175 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.175 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=63 00:20:42.175 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu63/cpufreq ]] 00:20:42.175 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.175 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.176 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu63/cpufreq/base_frequency ]] 00:20:42.176 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.176 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.176 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.176 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.176 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_63 00:20:42.176 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_63[@]' 00:20:42.176 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.176 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_63 00:20:42.176 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_63[@]' 00:20:42.176 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.176 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.176 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 63 0xce 00:20:42.176 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.176 20:15:39 -- scheduler/common.sh@289 -- # 
cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.176 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.176 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.176 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.176 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.176 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.176 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.176 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.176 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.176 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1700000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.176 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.176 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.176 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.176 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=64 00:20:42.176 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq ]] 00:20:42.176 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.176 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.176 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq/base_frequency ]] 00:20:42.176 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.176 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.176 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.176 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.176 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_64 00:20:42.176 20:15:39 -- 
scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_64[@]' 00:20:42.176 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.176 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_64 00:20:42.176 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_64[@]' 00:20:42.176 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.176 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.176 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 64 0xce 00:20:42.176 20:15:39 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.176 20:15:39 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.176 20:15:39 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.176 20:15:39 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.176 20:15:39 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.176 20:15:39 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.176 20:15:39 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.177 20:15:39 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.177 20:15:39 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.177 20:15:39 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.177 20:15:39 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 
00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:39 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:39 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:39 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.177 20:15:39 -- scheduler/common.sh@254 -- # cpu_idx=65 
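Everything above is the same sysfs walk repeated for each core: for every /sys/devices/system/cpu/cpuN that has a cpufreq node, the script records the driver (intel_pstate), governor (powersave), the base frequency, and the current/min/max scaling frequencies before branching on the driver. A rough stand-alone equivalent of that walk, for reference only (variable names are illustrative; the paths are the standard Linux cpufreq sysfs files):

    sysfs_cpu=/sys/devices/system/cpu
    for cpu in "$sysfs_cpu"/cpu[0-9]*; do
        [[ -e $cpu/cpufreq ]] || continue
        idx=${cpu##*cpu}
        driver=$(< "$cpu/cpufreq/scaling_driver")
        governor=$(< "$cpu/cpufreq/scaling_governor")
        cur=$(< "$cpu/cpufreq/scaling_cur_freq")
        min=$(< "$cpu/cpufreq/scaling_min_freq")
        max=$(< "$cpu/cpufreq/scaling_max_freq")
        echo "cpu$idx: $driver/$governor cur=${cur} range=${min}-${max}"
    done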
00:20:42.177 20:15:39 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq ]] 00:20:42.177 20:15:39 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.177 20:15:39 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.177 20:15:39 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq/base_frequency ]] 00:20:42.177 20:15:39 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.177 20:15:39 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.177 20:15:39 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.177 20:15:39 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.177 20:15:39 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_65 00:20:42.177 20:15:39 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_65[@]' 00:20:42.177 20:15:39 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.177 20:15:39 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_65 00:20:42.177 20:15:39 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_65[@]' 00:20:42.177 20:15:39 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.177 20:15:39 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.177 20:15:39 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 65 0xce 00:20:42.177 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.177 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.177 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.177 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.177 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.177 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.177 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.177 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.177 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.177 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.177 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:40 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.177 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.177 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.177 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 
00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.178 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=66 00:20:42.178 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq ]] 00:20:42.178 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.178 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.178 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq/base_frequency ]] 00:20:42.178 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.178 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.178 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.178 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.178 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_66 00:20:42.178 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_66[@]' 00:20:42.178 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.178 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_66 00:20:42.178 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_66[@]' 00:20:42.178 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.178 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.178 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 66 0xce 00:20:42.178 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.178 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.178 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.178 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.178 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.178 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.178 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.178 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.178 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.178 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.178 
20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.178 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.178 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.178 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- 
scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.179 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=67 00:20:42.179 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq ]] 00:20:42.179 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.179 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.179 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq/base_frequency ]] 00:20:42.179 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.179 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.179 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.179 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.179 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_67 00:20:42.179 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_67[@]' 00:20:42.179 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.179 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_67 00:20:42.179 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_67[@]' 00:20:42.179 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.179 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.179 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 67 0xce 00:20:42.179 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 
00:20:42.179 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.179 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.179 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.179 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.179 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.179 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.179 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.179 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.179 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.179 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.179 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.179 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.179 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 
-- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.442 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=68 00:20:42.442 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq ]] 00:20:42.442 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.442 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.442 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq/base_frequency ]] 00:20:42.442 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.442 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.442 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.442 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.442 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_68 
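For orientation, each per-CPU block in this trace (cpu67, cpu68, ...) starts by sampling the cpufreq sysfs entries before deriving the frequency table. A hedged sketch of that probe, using the standard cpufreq file names; the exact reads in scheduler/common.sh may differ:

```bash
# Hedged sketch of the per-CPU probe repeated above; values in the comments
# are what this node reports for intel_pstate-managed cores.
cpu=/sys/devices/system/cpu/cpu68
driver=$(< "$cpu/cpufreq/scaling_driver")              # intel_pstate
governor=$(< "$cpu/cpufreq/scaling_governor")          # powersave
base_freq=$(< "$cpu/cpufreq/base_frequency")           # 2300000
cur_freq=$(< "$cpu/cpufreq/scaling_cur_freq")          # ~1000000
max_freq=$(< "$cpu/cpufreq/scaling_max_freq")          # 2300001
min_freq=$(< "$cpu/cpufreq/scaling_min_freq")          # 1000000
available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
echo "$driver/$governor: cur=$cur_freq range=$min_freq-$max_freq base=$base_freq"
```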
00:20:42.442 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_68[@]' 00:20:42.442 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.442 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_68 00:20:42.442 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_68[@]' 00:20:42.442 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.442 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.442 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 68 0xce 00:20:42.442 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.442 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.442 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.442 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.442 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.442 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.442 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.442 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.442 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.442 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.442 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.442 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.442 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.442 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.443 20:15:40 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.443 20:15:40 -- 
scheduler/common.sh@254 -- # cpu_idx=69 00:20:42.443 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq ]] 00:20:42.443 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.443 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.443 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq/base_frequency ]] 00:20:42.443 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.443 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.443 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.443 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.443 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_69 00:20:42.443 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_69[@]' 00:20:42.443 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.443 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_69 00:20:42.443 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_69[@]' 00:20:42.443 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.443 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.443 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 69 0xce 00:20:42.443 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.443 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.443 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.443 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.443 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.443 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.443 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.443 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.443 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.443 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.443 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < 
num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.443 20:15:40 -- 
scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.443 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.443 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.443 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.444 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=7 00:20:42.444 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq ]] 00:20:42.444 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.444 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.444 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq/base_frequency ]] 00:20:42.444 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.444 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000076 00:20:42.444 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.444 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.444 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_7 00:20:42.444 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_7[@]' 00:20:42.444 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.444 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_7 00:20:42.444 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_7[@]' 00:20:42.444 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.444 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.444 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 7 0xce 00:20:42.444 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.444 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.444 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.444 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.444 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.444 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.444 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.444 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.444 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.444 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@302 -- # 
cpufreq_is_turbo[cpu_idx]=1 00:20:42.444 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < 
num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.444 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=70 00:20:42.444 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq ]] 00:20:42.444 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.444 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.444 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq/base_frequency ]] 00:20:42.444 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.444 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000000 00:20:42.444 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.444 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.444 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_70 00:20:42.444 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_70[@]' 00:20:42.444 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.444 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_70 00:20:42.444 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_70[@]' 00:20:42.444 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.444 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.444 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 70 0xce 00:20:42.444 20:15:40 -- scheduler/common.sh@288 -- # 
non_turbo_ratio=0x70a2cf3811700 00:20:42.444 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.444 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.444 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.444 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.444 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.444 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.444 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.444 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.444 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.444 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.444 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.444 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.444 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && 
cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.445 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=71 00:20:42.445 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu71/cpufreq ]] 00:20:42.445 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.445 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.445 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu71/cpufreq/base_frequency ]] 00:20:42.445 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.445 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000005 00:20:42.445 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.445 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.445 20:15:40 -- scheduler/common.sh@268 -- # 
local -n available_governors=available_governors_cpu_71 00:20:42.445 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_71[@]' 00:20:42.445 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.445 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_71 00:20:42.445 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_71[@]' 00:20:42.445 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.445 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.445 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 71 0xce 00:20:42.445 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.445 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.445 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.445 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.445 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.445 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.445 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.445 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.445 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.445 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.445 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=2000000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.445 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.445 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.445 20:15:40 -- scheduler/common.sh@253 -- # for 
cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.445 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=8 00:20:42.445 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq ]] 00:20:42.445 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.445 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.445 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq/base_frequency ]] 00:20:42.445 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.446 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000020 00:20:42.446 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.446 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.446 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_8 00:20:42.446 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_8[@]' 00:20:42.446 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.446 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_8 00:20:42.446 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_8[@]' 00:20:42.446 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.446 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.446 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 8 0xce 00:20:42.446 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.446 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.446 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.446 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.446 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.446 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.446 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.446 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.446 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.446 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.446 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 
20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # 
available_freqs[freq]=1200000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.446 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.446 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.446 20:15:40 -- scheduler/common.sh@253 -- # for cpu in "$sysfs_cpu/cpu"+([0-9]) 00:20:42.446 20:15:40 -- scheduler/common.sh@254 -- # cpu_idx=9 00:20:42.446 20:15:40 -- scheduler/common.sh@255 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq ]] 00:20:42.446 20:15:40 -- scheduler/common.sh@256 -- # cpufreq_drivers[cpu_idx]=intel_pstate 00:20:42.446 20:15:40 -- scheduler/common.sh@257 -- # cpufreq_governors[cpu_idx]=powersave 00:20:42.446 20:15:40 -- scheduler/common.sh@260 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq/base_frequency ]] 00:20:42.446 20:15:40 -- scheduler/common.sh@261 -- # cpufreq_base_freqs[cpu_idx]=2300000 00:20:42.446 20:15:40 -- scheduler/common.sh@264 -- # cpufreq_cur_freqs[cpu_idx]=1000216 00:20:42.446 20:15:40 -- scheduler/common.sh@265 -- # cpufreq_max_freqs[cpu_idx]=2300001 00:20:42.446 20:15:40 -- scheduler/common.sh@266 -- # cpufreq_min_freqs[cpu_idx]=1000000 00:20:42.446 20:15:40 -- scheduler/common.sh@268 -- # local -n available_governors=available_governors_cpu_9 00:20:42.446 20:15:40 -- scheduler/common.sh@269 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_9[@]' 00:20:42.446 20:15:40 -- scheduler/common.sh@270 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors")) 00:20:42.446 20:15:40 -- scheduler/common.sh@272 -- # local -n available_freqs=available_freqs_cpu_9 00:20:42.446 20:15:40 -- scheduler/common.sh@273 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_9[@]' 00:20:42.446 20:15:40 -- scheduler/common.sh@275 -- # case "${cpufreq_drivers[cpu_idx]}" in 00:20:42.446 20:15:40 -- scheduler/common.sh@286 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0 00:20:42.446 20:15:40 -- scheduler/common.sh@288 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 9 0xce 00:20:42.706 20:15:40 -- scheduler/common.sh@288 -- # non_turbo_ratio=0x70a2cf3811700 00:20:42.706 20:15:40 -- scheduler/common.sh@289 -- # cpuinfo_min_freqs[cpu_idx]=1000000 00:20:42.706 20:15:40 -- scheduler/common.sh@290 -- # cpuinfo_max_freqs[cpu_idx]=3700000 00:20:42.706 20:15:40 -- scheduler/common.sh@291 -- # cpufreq_non_turbo_ratio[cpu_idx]=23 00:20:42.706 20:15:40 -- scheduler/common.sh@292 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] )) 00:20:42.706 20:15:40 -- scheduler/common.sh@296 -- # cpufreq_high_prio[cpu_idx]=0 00:20:42.706 20:15:40 -- scheduler/common.sh@297 -- # base_max_freq=2300000 00:20:42.706 20:15:40 -- scheduler/common.sh@299 -- # num_freqs=14 00:20:42.706 20:15:40 -- scheduler/common.sh@300 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )) 00:20:42.706 20:15:40 -- scheduler/common.sh@301 -- # (( num_freqs += 1 )) 
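The enumeration that follows for cpu9 repeats the same pattern seen for every CPU above: 14 non-turbo steps of 100000 kHz from 2300000 down to 1000000, plus one extra leading slot holding scaling_max_freq+1 (2300001) as the turbo marker. A minimal sketch that reproduces those values; the real arithmetic lives in scheduler/common.sh and may differ in detail:

```bash
# Rebuild the per-CPU frequency list the xtrace shows, assuming a 100000 kHz
# step between base_max_freq and the minimum frequency, plus one turbo slot.
base_max_freq=2300000
min_freq=1000000
scaling_max_freq=2300001
is_turbo=1
num_freqs=$(( (base_max_freq - min_freq) / 100000 + 1 + is_turbo ))   # 15
available_freqs=()
for (( freq = 0; freq < num_freqs; freq++ )); do
    if (( freq == 0 && is_turbo == 1 )); then
        available_freqs[freq]=$scaling_max_freq                        # 2300001
    else
        available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
    fi
done
printf '%s\n' "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000
```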
00:20:42.706 20:15:40 -- scheduler/common.sh@302 -- # cpufreq_is_turbo[cpu_idx]=1 00:20:42.706 20:15:40 -- scheduler/common.sh@306 -- # available_freqs=() 00:20:42.706 20:15:40 -- scheduler/common.sh@307 -- # (( freq = 0 )) 00:20:42.706 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.706 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.706 20:15:40 -- scheduler/common.sh@309 -- # available_freqs[freq]=2300001 00:20:42.706 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.706 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2300000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2200000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2100000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=2000000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1900000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1800000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1700000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1600000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1500000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 
00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1400000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1300000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1200000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1100000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@308 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )) 00:20:42.707 20:15:40 -- scheduler/common.sh@311 -- # available_freqs[freq]=1000000 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq++ )) 00:20:42.707 20:15:40 -- scheduler/common.sh@307 -- # (( freq < num_freqs )) 00:20:42.707 20:15:40 -- scheduler/common.sh@352 -- # [[ -e /sys/devices/system/cpu/cpufreq/boost ]] 00:20:42.707 20:15:40 -- scheduler/common.sh@354 -- # [[ -e /sys/devices/system/cpu/intel_pstate/no_turbo ]] 00:20:42.707 20:15:40 -- scheduler/common.sh@355 -- # turbo_enabled=1 00:20:42.707 20:15:40 -- scheduler/governor.sh@159 -- # initial_main_core_governor=powersave 00:20:42.707 20:15:40 -- scheduler/governor.sh@161 -- # verify_dpdk_governor 00:20:42.707 20:15:40 -- scheduler/governor.sh@60 -- # xtrace_disable 00:20:42.707 20:15:40 -- common/autotest_common.sh@10 -- # set +x 00:20:42.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.967 [2024-04-25 20:15:40.677515] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
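The trace above is scheduler/common.sh building its per-core cpufreq tables. For cpu9 under intel_pstate it reads scaling_driver, scaling_governor, base_frequency and the cpuinfo min/max limits from sysfs, decodes the non-turbo ratio from MSR 0xCE via rdmsr.pl (23 here, i.e. a 2.3 GHz base), flags the core as turbo-capable because base_frequency (2300000 kHz) is below cpuinfo_max_freq (3700000 kHz), and then synthesizes available_freqs_cpu_9 in 100 MHz steps with a single 2300001 kHz entry up front standing in for the turbo range. A minimal sketch of that enumeration, assuming intel_pstate exposes base_frequency as it does in this run (this is not the actual scheduler/common.sh):

    # Hedged sketch: rebuild one CPU's frequency table the way the trace above does.
    cpu=9
    cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq

    driver=$(< "$cpufreq/scaling_driver")        # intel_pstate in this run
    governor=$(< "$cpufreq/scaling_governor")    # powersave
    base_freq=$(< "$cpufreq/base_frequency")     # 2300000 kHz
    min_freq=$(< "$cpufreq/cpuinfo_min_freq")    # 1000000 kHz
    max_freq=$(< "$cpufreq/cpuinfo_max_freq")    # 3700000 kHz, absolute turbo ceiling

    # Turbo capability is inferred the same way the trace does it: the base
    # frequency sits below the cpuinfo maximum.
    (( base_freq < max_freq )) && echo "cpu$cpu is turbo capable"

    # intel_pstate has no scaling_available_frequencies, so build the list in
    # 100 MHz steps from base_freq down to min_freq, with base_freq+1 kHz
    # prepended to represent the turbo range (2300001 in the log).
    available_freqs=($((base_freq + 1)))
    for ((f = base_freq; f >= min_freq; f -= 100000)); do
            available_freqs+=("$f")
    done
    printf 'cpu%s (%s/%s): %s\n' "$cpu" "$driver" "$governor" "${available_freqs[*]}"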
00:20:42.967 [2024-04-25 20:15:40.677594] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182637 ] 00:20:42.967 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.967 [2024-04-25 20:15:40.772270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8 00:20:42.967 [2024-04-25 20:15:40.884872] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:42.967 [2024-04-25 20:15:40.885093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.967 [2024-04-25 20:15:40.885192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.967 [2024-04-25 20:15:40.885291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:42.967 [2024-04-25 20:15:40.885407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40 00:20:42.967 [2024-04-25 20:15:40.885331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38 00:20:42.967 [2024-04-25 20:15:40.885366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39 00:20:42.967 [2024-04-25 20:15:40.885408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.967 [2024-04-25 20:15:40.885307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37 00:20:43.904 POWER: Env isn't set yet! 00:20:43.904 POWER: Attempting to initialise ACPI cpufreq power management... 00:20:43.904 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:20:43.904 POWER: Cannot set governor of lcore 1 to userspace 00:20:43.904 POWER: Attempting to initialise PSTAT power management... 
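The POWER messages above come from DPDK's power library during EAL init: it first tries ACPI-cpufreq style control, which needs the 'userspace' governor, and that write fails because intel_pstate only offers 'performance' and 'powersave'; it then falls back to PSTAT (intel_pstate) management, which works by pinning the governor to 'performance' and moving the scaling limits instead. A rough way to see which path a core will take, assuming the standard sysfs layout (a sketch of the decision, not the DPDK code itself):

    # Hedged sketch of the check behind "Cannot set governor of lcore 1 to
    # userspace" followed by "Attempting to initialise PSTAT power management".
    cpu=1
    cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq

    if grep -qw userspace "$cpufreq/scaling_available_governors"; then
            echo "cpu$cpu: userspace governor available, ACPI-cpufreq style control possible"
    else
            echo "cpu$cpu: $(< "$cpufreq/scaling_driver") offers no 'userspace' governor, falling back to pstate-style control"
    fi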
00:20:43.904 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 1 power management 00:20:43.904 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 2 power management 00:20:43.904 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 3 power management 00:20:43.904 POWER: Power management governor of lcore 4 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 4 power management 00:20:43.904 POWER: Power management governor of lcore 37 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 37 power management 00:20:43.904 POWER: Power management governor of lcore 38 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 38 power management 00:20:43.904 POWER: Power management governor of lcore 39 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 39 power management 00:20:43.904 POWER: Power management governor of lcore 40 has been set to 'performance' successfully 00:20:43.904 POWER: Initialized successfully for lcore 40 power management 00:20:44.473 [2024-04-25 20:15:42.161195] 'OCF_Core' volume operations registered 00:20:44.473 [2024-04-25 20:15:42.164348] 'OCF_Cache' volume operations registered 00:20:44.473 [2024-04-25 20:15:42.167880] 'OCF Composite' volume operations registered 00:20:44.473 [2024-04-25 20:15:42.171084] 'SPDK_block_device' volume operations registered 00:20:46.376 Waiting for samples... 
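With every reactor lcore pinned to 'performance', governor.sh now waits for samples: the lines that follow are the test reading cpu1's current frequency and confirming that the DPDK governor keeps stepping the set frequency down while the reactors sit idle, until the minimum is reached and the overall drop is reported as a percentage. A trivial way to watch the same thing by hand (a sketch; the cpu number and sample count are arbitrary):

    # Hedged sketch: sample the main core's current frequency like the
    # "MAIN DPDK cpu1 current frequency at ... KHz" lines below summarize it.
    cpu=1
    cur=/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_cur_freq
    for i in {1..8}; do
            printf 'sample %d: cpu%d at %s kHz\n' "$i" "$cpu" "$(< "$cur")"
            sleep 2
    done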
00:20:47.753 MAIN DPDK cpu1 current frequency at 2199999 KHz (1000000-2300001 KHz), set frequency 2000000 KHz < 2200000 KHz 00:20:49.658 MAIN DPDK cpu1 current frequency at 2000002 KHz (1000000-2300001 KHz), set frequency 1900000 KHz < 2000000 KHz 00:20:51.034 MAIN DPDK cpu1 current frequency at 1900000 KHz (1000000-2300001 KHz), set frequency 1700000 KHz < 1900000 KHz 00:20:52.994 MAIN DPDK cpu1 current frequency at 1699999 KHz (1000000-2300001 KHz), set frequency 1500000 KHz < 1700000 KHz 00:20:54.372 MAIN DPDK cpu1 current frequency at 1499996 KHz (1000000-2300001 KHz), set frequency 1400000 KHz < 1500000 KHz 00:20:56.268 MAIN DPDK cpu1 current frequency at 1399997 KHz (1000000-2300001 KHz), set frequency 1200000 KHz < 1400000 KHz 00:20:57.642 MAIN DPDK cpu1 current frequency at 1200001 KHz (1000000-2300001 KHz), set frequency 1000000 KHz < 1200000 KHz 00:20:57.642 Main cpu1 frequency dropped by 84% 00:20:57.642 20:15:55 -- scheduler/governor.sh@1 -- # killprocess 2182637 00:20:57.642 20:15:55 -- common/autotest_common.sh@926 -- # '[' -z 2182637 ']' 00:20:57.642 20:15:55 -- common/autotest_common.sh@930 -- # kill -0 2182637 00:20:57.642 20:15:55 -- common/autotest_common.sh@931 -- # uname 00:20:57.642 20:15:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:57.642 20:15:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2182637 00:20:57.900 20:15:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:57.900 20:15:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:57.900 20:15:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2182637' 00:20:57.900 killing process with pid 2182637 00:20:57.900 20:15:55 -- common/autotest_common.sh@945 -- # kill 2182637 00:20:57.900 20:15:55 -- common/autotest_common.sh@950 -- # wait 2182637 00:20:58.159 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:20:58.159 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:20:58.159 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:20:58.159 POWER: Power management governor of lcore 4 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original 00:20:58.159 POWER: Power management governor of lcore 37 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original 00:20:58.159 POWER: Power management governor of lcore 38 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original 00:20:58.159 POWER: Power management governor of lcore 39 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original 00:20:58.159 POWER: Power management governor of lcore 40 has been set to 'powersave' successfully 00:20:58.159 POWER: Power management of lcore 40 has exited from 'performance' mode and been set 
back to the original 00:20:58.731 20:15:56 -- scheduler/governor.sh@1 -- # restore_cpufreq 00:20:58.731 20:15:56 -- scheduler/governor.sh@15 -- # local cpu 00:20:58.731 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.731 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 1 1000000 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@360 -- # local cpu=1 00:20:58.731 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.731 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.731 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.731 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.731 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 1 powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@388 -- # local cpu=1 00:20:58.731 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.731 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.731 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 0 1000000 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@360 -- # local cpu=0 00:20:58.731 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.731 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.731 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.731 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.731 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 0 powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@388 -- # local cpu=0 00:20:58.731 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.731 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.731 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 2 1000000 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@360 -- # local cpu=2 00:20:58.731 20:15:56 -- 
scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.731 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.731 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.731 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.731 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 2 powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@388 -- # local cpu=2 00:20:58.731 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.731 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.731 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 3 1000000 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@360 -- # local cpu=3 00:20:58.731 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.731 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.731 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.731 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.731 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 3 powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@388 -- # local cpu=3 00:20:58.731 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.731 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.731 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.731 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 4 1000000 2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@360 -- # local cpu=4 00:20:58.731 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.731 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.731 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq 00:20:58.731 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.731 20:15:56 -- scheduler/common.sh@369 -- # case 
"${cpufreq_drivers[cpu]}" in 00:20:58.731 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 4 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=4 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 5 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=5 00:20:58.732 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.732 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu5/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 5 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=5 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu5/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 6 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=6 00:20:58.732 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.732 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- 
scheduler/governor.sh@19 -- # set_cpufreq_governor 6 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=6 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 7 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=7 00:20:58.732 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.732 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 7 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=7 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 8 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=8 00:20:58.732 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.732 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 8 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=8 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- 
scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 9 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=9 00:20:58.732 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.732 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 9 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=9 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 10 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=10 00:20:58.732 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.732 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 10 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=10 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 11 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=11 00:20:58.732 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.732 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@363 -- # local 
cpufreq=/sys/devices/system/cpu/cpu11/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.732 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.732 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.732 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.732 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 11 powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@388 -- # local cpu=11 00:20:58.732 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.732 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu11/cpufreq 00:20:58.732 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.732 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.732 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 12 1000000 2300001 00:20:58.732 20:15:56 -- scheduler/common.sh@360 -- # local cpu=12 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 12 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=12 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.733 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.733 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 13 1000000 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@360 -- # local cpu=13 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 
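Everything from restore_cpufreq onward is the teardown half of the test: after spdk_tgt is killed and DPDK hands the lcores back to 'powersave', governor.sh walks the main core and then every other CPU, re-applying the 1000000-2300001 kHz scaling range itself. Per core the trace shows an "echo 2300001" followed by an "echo 1000000", presumably into scaling_max_freq and scaling_min_freq (xtrace does not print redirection targets), and a comparison of the requested governor against 'powersave' that never falls through to a write in this run. A condensed per-CPU restore under those assumptions (not the literal scheduler/common.sh, and the sysfs writes need root):

    # Hedged sketch of the set_cpufreq / set_cpufreq_governor pair being traced
    # in this section; the real script dispatches on the cpufreq driver, this
    # assumes intel_pstate throughout.
    restore_one_cpu() {
            local cpu=$1 governor=$2 min_freq=1000000 max_freq=2300001
            local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq

            echo "$max_freq" > "$cpufreq/scaling_max_freq"   # assumed target of the "echo 2300001" above
            echo "$min_freq" > "$cpufreq/scaling_min_freq"   # assumed target of the "echo 1000000" above

            # The trace only shows a comparison against 'powersave' here, so the
            # sketch mirrors it: write the governor only when something else is requested.
            if [[ $governor != powersave ]]; then
                    echo "$governor" > "$cpufreq/scaling_governor"
            fi
    }
    restore_one_cpu 1 powersave    # the run handles the main core first, then every other cpu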
00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 13 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=13 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.733 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.733 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 14 1000000 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@360 -- # local cpu=14 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 14 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=14 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.733 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.733 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 15 1000000 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@360 -- # local cpu=15 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 15 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=15 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- 
# local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.733 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.733 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 16 1000000 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@360 -- # local cpu=16 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 16 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=16 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.733 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.733 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 17 1000000 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@360 -- # local cpu=17 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 17 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=17 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.733 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.733 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 36 1000000 2300001 00:20:58.733 20:15:56 -- 
scheduler/common.sh@360 -- # local cpu=36 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 36 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=36 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.733 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.733 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@360 -- # local cpu=37 00:20:58.733 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.733 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.733 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.733 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.733 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.733 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.733 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@388 -- # local cpu=37 00:20:58.733 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.733 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:20:58.733 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=38 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ 
-n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=38 00:20:58.734 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=39 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 39 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=39 00:20:58.734 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=40 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 
20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=40 00:20:58.734 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 41 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=41 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 41 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=41 00:20:58.734 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 42 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=42 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 42 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=42 00:20:58.734 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq 00:20:58.734 20:15:56 -- 
scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 43 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=43 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 43 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=43 00:20:58.734 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 44 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=44 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 44 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=44 00:20:58.734 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.734 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.734 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 45 1000000 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@360 -- # local cpu=45 00:20:58.734 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.734 20:15:56 -- 
scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq 00:20:58.734 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.734 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.734 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.734 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.734 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.734 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 45 powersave 00:20:58.734 20:15:56 -- scheduler/common.sh@388 -- # local cpu=45 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 46 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=46 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.735 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.735 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.735 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 46 powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@388 -- # local cpu=46 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 47 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=47 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- 
scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.735 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.735 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.735 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 47 powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@388 -- # local cpu=47 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 48 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=48 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.735 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.735 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.735 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 48 powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@388 -- # local cpu=48 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 49 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=49 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.735 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.735 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.735 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 49 
powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@388 -- # local cpu=49 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 50 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=50 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.735 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.735 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.735 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 50 powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@388 -- # local cpu=50 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 51 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=51 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.735 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.735 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.735 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 51 powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@388 -- # local cpu=51 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in 
"$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 52 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=52 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.735 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.735 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.735 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 52 powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@388 -- # local cpu=52 00:20:58.735 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.735 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.735 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.735 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 53 1000000 2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@360 -- # local cpu=53 00:20:58.735 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.735 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.735 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq 00:20:58.735 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.735 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.735 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.736 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.736 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.736 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 53 powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@388 -- # local cpu=53 00:20:58.736 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.736 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.736 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 18 1000000 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@360 -- # local cpu=18 00:20:58.736 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.736 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@363 -- # local 
cpufreq=/sys/devices/system/cpu/cpu18/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.736 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.736 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.736 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 18 powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@388 -- # local cpu=18 00:20:58.736 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu18/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.736 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.736 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@360 -- # local cpu=37 00:20:58.736 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.736 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.736 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.736 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.736 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@388 -- # local cpu=37 00:20:58.736 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.736 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.736 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@360 -- # local cpu=38 00:20:58.736 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.736 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 
00:20:58.736 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.736 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.736 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@388 -- # local cpu=38 00:20:58.736 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.736 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.736 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@360 -- # local cpu=39 00:20:58.736 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.736 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.736 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.736 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.736 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 39 powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@388 -- # local cpu=39 00:20:58.736 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.736 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.736 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@360 -- # local cpu=40 00:20:58.736 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.736 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.736 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.736 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.736 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@388 -- # local cpu=40 00:20:58.736 20:15:56 -- scheduler/common.sh@389 -- 
# local governor=powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.736 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.736 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 23 1000000 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@360 -- # local cpu=23 00:20:58.736 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.736 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.736 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.736 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.736 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.736 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.736 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 23 powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@388 -- # local cpu=23 00:20:58.736 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.736 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq 00:20:58.736 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 24 1000000 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@360 -- # local cpu=24 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 24 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=24 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 25 1000000 2300001 00:20:58.737 20:15:56 -- 
scheduler/common.sh@360 -- # local cpu=25 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 25 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=25 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 26 1000000 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@360 -- # local cpu=26 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 26 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=26 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 27 1000000 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@360 -- # local cpu=27 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ 
-n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 27 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=27 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 28 1000000 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@360 -- # local cpu=28 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 28 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=28 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 29 1000000 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@360 -- # local cpu=29 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 
20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 29 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=29 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 30 1000000 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@360 -- # local cpu=30 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 30 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=30 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.737 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.737 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 31 1000000 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@360 -- # local cpu=31 00:20:58.737 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.737 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq 00:20:58.737 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.737 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.737 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.737 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.737 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.737 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 31 powersave 00:20:58.737 20:15:56 -- scheduler/common.sh@388 -- # local cpu=31 00:20:58.737 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq 00:20:58.738 20:15:56 -- 
scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 32 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=32 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.738 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 32 powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@388 -- # local cpu=32 00:20:58.738 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 33 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=33 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.738 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 33 powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@388 -- # local cpu=33 00:20:58.738 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 34 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=34 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- 
scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.738 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 34 powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@388 -- # local cpu=34 00:20:58.738 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 35 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=35 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.738 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 35 powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@388 -- # local cpu=35 00:20:58.738 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 54 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=54 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- 
scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.738 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 54 powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@388 -- # local cpu=54 00:20:58.738 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 55 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=55 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.738 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 55 powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@388 -- # local cpu=55 00:20:58.738 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 56 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=56 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.738 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 56 
powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@388 -- # local cpu=56 00:20:58.738 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.738 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.738 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.738 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 57 1000000 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@360 -- # local cpu=57 00:20:58.738 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.738 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq 00:20:58.738 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.738 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.738 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.738 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.738 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.999 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 57 powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@388 -- # local cpu=57 00:20:58.999 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.999 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.999 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 58 1000000 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@360 -- # local cpu=58 00:20:58.999 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.999 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.999 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.999 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.999 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 58 powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@388 -- # local cpu=58 00:20:58.999 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.999 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in 
"$spdk_main_core" "${cpus[@]}" 00:20:58.999 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 59 1000000 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@360 -- # local cpu=59 00:20:58.999 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.999 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.999 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.999 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.999 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 59 powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@388 -- # local cpu=59 00:20:58.999 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.999 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.999 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 60 1000000 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@360 -- # local cpu=60 00:20:58.999 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.999 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.999 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.999 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.999 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 60 powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@388 -- # local cpu=60 00:20:58.999 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.999 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.999 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 61 1000000 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@360 -- # local cpu=61 00:20:58.999 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.999 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@363 -- # local 
cpufreq=/sys/devices/system/cpu/cpu61/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.999 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.999 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.999 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 61 powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@388 -- # local cpu=61 00:20:58.999 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu61/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:58.999 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:58.999 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 62 1000000 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@360 -- # local cpu=62 00:20:58.999 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:58.999 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq 00:20:58.999 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:58.999 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:58.999 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:58.999 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:58.999 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:58.999 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 62 powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@388 -- # local cpu=62 00:20:58.999 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:58.999 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 63 1000000 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@360 -- # local cpu=63 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 
00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 63 powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@388 -- # local cpu=63 00:20:59.000 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 64 1000000 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@360 -- # local cpu=64 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 64 powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@388 -- # local cpu=64 00:20:59.000 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 65 1000000 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@360 -- # local cpu=65 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 65 powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@388 -- # local cpu=65 00:20:59.000 20:15:56 -- scheduler/common.sh@389 -- 
# local governor=powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 66 1000000 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@360 -- # local cpu=66 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 66 powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@388 -- # local cpu=66 00:20:59.000 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 67 1000000 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@360 -- # local cpu=67 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 67 powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@388 -- # local cpu=67 00:20:59.000 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 68 1000000 2300001 00:20:59.000 20:15:56 -- 
scheduler/common.sh@360 -- # local cpu=68 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 68 powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@388 -- # local cpu=68 00:20:59.000 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 69 1000000 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@360 -- # local cpu=69 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu69/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 69 powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@388 -- # local cpu=69 00:20:59.000 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.000 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu69/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.000 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.000 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 70 1000000 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@360 -- # local cpu=70 00:20:59.000 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.000 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq 00:20:59.000 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@367 -- # [[ 
-n 1000000 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.000 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.000 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.000 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.000 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.000 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 70 powersave 00:20:59.001 20:15:56 -- scheduler/common.sh@388 -- # local cpu=70 00:20:59.001 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.001 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq 00:20:59.001 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.001 20:15:56 -- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}" 00:20:59.001 20:15:56 -- scheduler/governor.sh@18 -- # set_cpufreq 71 1000000 2300001 00:20:59.001 20:15:56 -- scheduler/common.sh@360 -- # local cpu=71 00:20:59.001 20:15:56 -- scheduler/common.sh@361 -- # local min_freq=1000000 00:20:59.001 20:15:56 -- scheduler/common.sh@362 -- # local max_freq=2300001 00:20:59.001 20:15:56 -- scheduler/common.sh@363 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq 00:20:59.001 20:15:56 -- scheduler/common.sh@366 -- # [[ -n intel_pstate ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@367 -- # [[ -n 1000000 ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@369 -- # case "${cpufreq_drivers[cpu]}" in 00:20:59.001 20:15:56 -- scheduler/common.sh@377 -- # [[ -n 2300001 ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@377 -- # (( max_freq >= min_freq )) 00:20:59.001 20:15:56 -- scheduler/common.sh@378 -- # echo 2300001 00:20:59.001 20:15:56 -- scheduler/common.sh@380 -- # (( min_freq <= cpufreq_max_freqs[cpu] )) 00:20:59.001 20:15:56 -- scheduler/common.sh@381 -- # echo 1000000 00:20:59.001 20:15:56 -- scheduler/governor.sh@19 -- # set_cpufreq_governor 71 powersave 00:20:59.001 20:15:56 -- scheduler/common.sh@388 -- # local cpu=71 00:20:59.001 20:15:56 -- scheduler/common.sh@389 -- # local governor=powersave 00:20:59.001 20:15:56 -- scheduler/common.sh@390 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq 00:20:59.001 20:15:56 -- scheduler/common.sh@392 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]] 00:20:59.001 00:20:59.001 real 0m19.327s 00:20:59.001 user 0m31.211s 00:20:59.001 sys 0m8.090s 00:20:59.001 20:15:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:59.001 20:15:56 -- common/autotest_common.sh@10 -- # set +x 00:20:59.001 ************************************ 00:20:59.001 END TEST dpdk_governor 00:20:59.001 ************************************ 00:20:59.001 20:15:56 -- scheduler/scheduler.sh@17 -- # run_test interrupt_mode /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh 00:20:59.001 20:15:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:59.001 20:15:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:59.001 20:15:56 -- common/autotest_common.sh@10 -- # set +x 00:20:59.001 ************************************ 00:20:59.001 START TEST interrupt_mode 00:20:59.001 ************************************ 00:20:59.001 20:15:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh 00:20:59.001 * Looking for test storage... 
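[annotation] The dpdk_governor teardown traced above walks every test core and restores its cpufreq limits and governor through sysfs. A minimal sketch of that restore pattern follows; it is reconstructed only from the xtrace (set_cpufreq / set_cpufreq_governor in scheduler/common.sh), and the assumption that the echoed values land in the standard scaling_max_freq / scaling_min_freq / scaling_governor attributes is mine, since the redirects are not visible in the trace.

```bash
#!/usr/bin/env bash
# Sketch of the per-CPU restore loop seen in the trace above (assumed sysfs
# targets): clamp max then min frequency, and rewrite the governor only when
# it differs from the desired one, exactly as the [[ powersave != powersave ]]
# checks in the log skip the write.
restore_cpufreq() {
	local cpu=$1 min_freq=$2 max_freq=$3 governor=$4
	local cpufreq=/sys/devices/system/cpu/cpu${cpu}/cpufreq

	# The trace asserts max >= min before writing anything.
	(( max_freq >= min_freq )) || return 1
	echo "$max_freq" > "$cpufreq/scaling_max_freq"   # assumption: target file
	echo "$min_freq" > "$cpufreq/scaling_min_freq"   # assumption: target file

	# Skip the governor write when it is already set (as the log does).
	[[ $(<"$cpufreq/scaling_governor") == "$governor" ]] ||
		echo "$governor" > "$cpufreq/scaling_governor"
}

# Values matching the log: cores 65-71, 1.0 GHz min, 2.3 GHz max, powersave.
for cpu in {65..71}; do
	restore_cpufreq "$cpu" 1000000 2300001 powersave
done
```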
00:20:59.001 * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh 00:20:59.001 20:15:56 -- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system 00:20:59.001 20:15:56 -- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu 00:20:59.001 20:15:56 -- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node 00:20:59.001 20:15:56 -- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler 00:20:59.001 20:15:56 -- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin 00:20:59.001 20:15:56 -- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@256 -- # declare -r sysfs_cgroup=/sys/fs/cgroup 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@257 -- # check_cgroup 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]] 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]] 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@10 -- # echo 2 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@257 -- # cgroup_version=2 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@12 -- # trap 'killprocess "$spdk_pid"' EXIT 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@14 -- # cpus=() 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@14 -- # declare -a cpus 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@15 -- # cpus_to_collect=() 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@15 -- # declare -a cpus_to_collect 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@17 -- # parse_cpu_list /dev/fd/62 00:20:59.001 20:15:56 -- scheduler/common.sh@34 -- # local list=/dev/fd/62 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@17 -- # echo 1,2,3,4,37,38,39,40 00:20:59.001 20:15:56 -- scheduler/common.sh@35 -- # local elem elems cpus 00:20:59.001 20:15:56 -- scheduler/common.sh@38 -- # IFS=, 00:20:59.001 20:15:56 -- scheduler/common.sh@38 -- # read -ra elems 00:20:59.001 20:15:56 -- scheduler/common.sh@40 -- # (( 8 > 0 )) 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 1 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 -- # cpus[elem]=1 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 2 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 -- # cpus[elem]=2 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 3 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 -- # cpus[elem]=3 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 4 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 -- # cpus[elem]=4 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 37 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 -- # cpus[elem]=37 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 38 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 
-- # cpus[elem]=38 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 39 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 -- # cpus[elem]=39 00:20:59.001 20:15:56 -- scheduler/common.sh@42 -- # for elem in "${elems[@]}" 00:20:59.001 20:15:56 -- scheduler/common.sh@43 -- # [[ 40 == *-* ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@49 -- # cpus[elem]=40 00:20:59.001 20:15:56 -- scheduler/common.sh@52 -- # printf '%u\n' 1 2 3 4 37 38 39 40 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@17 -- # fold_list_onto_array cpus 1 2 3 4 37 38 39 40 00:20:59.001 20:15:56 -- scheduler/common.sh@16 -- # local array=cpus 00:20:59.001 20:15:56 -- scheduler/common.sh@17 -- # local elem 00:20:59.001 20:15:56 -- scheduler/common.sh@19 -- # shift 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=1' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=1 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=2' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=2 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=3' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=3 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=4' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=4 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=37' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=37 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=38' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=38 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=39' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=39 00:20:59.001 20:15:56 -- scheduler/common.sh@21 -- # for elem in "$@" 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # eval 'cpus[elem]=40' 00:20:59.001 20:15:56 -- scheduler/common.sh@22 -- # cpus[elem]=40 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@19 -- # cpus=("${cpus[@]}") 00:20:59.001 20:15:56 -- scheduler/interrupt.sh@78 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 00:20:59.001 20:15:56 -- scheduler/common.sh@398 -- # [[ -e /proc//status ]] 00:20:59.001 20:15:56 -- scheduler/common.sh@402 -- # spdk_pid=2185987 00:20:59.001 20:15:56 -- scheduler/common.sh@404 -- # waitforlisten 2185987 00:20:59.001 20:15:56 -- scheduler/common.sh@401 -- # exec_in_cgroup /cpuset/spdk /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc 00:20:59.001 20:15:56 -- common/autotest_common.sh@819 -- # '[' -z 2185987 ']' 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@134 -- # local cgroup=/cpuset/spdk 00:20:59.001 20:15:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@135 -- # local 
proc_interface=cgroup.procs 00:20:59.001 20:15:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:59.001 20:15:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@137 -- # shift 00:20:59.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.001 20:15:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@139 -- # (( cgroup_version == 2 )) 00:20:59.001 20:15:56 -- common/autotest_common.sh@10 -- # set +x 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@139 -- # is_cgroup_threaded /cpuset/spdk 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@49 -- # [[ -e /sys/fs/cgroup//cpuset/spdk/cgroup.type ]] 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@50 -- # [[ threaded == threaded ]] 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@140 -- # proc_interface=cgroup.threads 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@142 -- # set_cgroup_attr /cpuset/spdk cgroup.threads 2185987 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@101 -- # local cgroup=/cpuset/spdk 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@102 -- # local attr=cgroup.threads 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@103 -- # local val=2185987 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@105 -- # [[ -e /sys/fs/cgroup//cpuset/spdk/cgroup.threads ]] 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@107 -- # [[ -n 2185987 ]] 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@108 -- # echo 2185987 00:20:59.001 20:15:56 -- scheduler/cgroups.sh@143 -- # exec /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc 00:20:59.260 [2024-04-25 20:15:56.940098] Starting SPDK v24.01.1-pre git sha1 36faa8c31 / DPDK 23.11.0 initialization... 
00:20:59.260 [2024-04-25 20:15:56.940177] [ DPDK EAL parameters: scheduler --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185987 ] 00:20:59.260 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.260 [2024-04-25 20:15:57.032706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8 00:20:59.260 [2024-04-25 20:15:57.133109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.260 [2024-04-25 20:15:57.133207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.260 [2024-04-25 20:15:57.133303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:59.260 [2024-04-25 20:15:57.133322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37 00:20:59.260 [2024-04-25 20:15:57.133362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38 00:20:59.260 [2024-04-25 20:15:57.133399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39 00:20:59.260 [2024-04-25 20:15:57.133438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.260 [2024-04-25 20:15:57.133439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40 00:21:00.196 20:15:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:00.197 20:15:57 -- common/autotest_common.sh@852 -- # return 0 00:21:00.197 20:15:57 -- scheduler/common.sh@405 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic 00:21:00.197 POWER: Env isn't set yet! 00:21:00.197 POWER: Attempting to initialise ACPI cpufreq power management... 00:21:00.197 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:00.197 POWER: Cannot set governor of lcore 1 to userspace 00:21:00.197 POWER: Attempting to initialise PSTAT power management... 
00:21:00.197 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:21:00.197 POWER: Initialized successfully for lcore 1 power management 00:21:00.197 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:21:00.197 POWER: Initialized successfully for lcore 2 power management 00:21:00.197 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:21:00.197 POWER: Initialized successfully for lcore 3 power management 00:21:00.197 POWER: Power management governor of lcore 4 has been set to 'performance' successfully 00:21:00.197 POWER: Initialized successfully for lcore 4 power management 00:21:00.197 POWER: Power management governor of lcore 37 has been set to 'performance' successfully 00:21:00.197 POWER: Initialized successfully for lcore 37 power management 00:21:00.197 POWER: Power management governor of lcore 38 has been set to 'performance' successfully 00:21:00.197 POWER: Initialized successfully for lcore 38 power management 00:21:00.456 POWER: Power management governor of lcore 39 has been set to 'performance' successfully 00:21:00.456 POWER: Initialized successfully for lcore 39 power management 00:21:00.456 POWER: Power management governor of lcore 40 has been set to 'performance' successfully 00:21:00.456 POWER: Initialized successfully for lcore 40 power management 00:21:00.456 20:15:58 -- scheduler/common.sh@406 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:00.716 [2024-04-25 20:15:58.535452] 'OCF_Core' volume operations registered 00:21:00.716 [2024-04-25 20:15:58.538599] 'OCF_Cache' volume operations registered 00:21:00.716 [2024-04-25 20:15:58.542133] 'OCF Composite' volume operations registered 00:21:00.716 [2024-04-25 20:15:58.545291] 'SPDK_block_device' volume operations registered 00:21:00.716 [2024-04-25 20:15:58.546212] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:21:00.716 20:15:58 -- scheduler/interrupt.sh@80 -- # interrupt 00:21:00.716 20:15:58 -- scheduler/interrupt.sh@22 -- # local busy_cpus 00:21:00.716 20:15:58 -- scheduler/interrupt.sh@23 -- # local cpu thread 00:21:00.716 20:15:58 -- scheduler/interrupt.sh@25 -- # local reactor_framework 00:21:00.716 20:15:58 -- scheduler/interrupt.sh@27 -- # cpus_to_collect=("${cpus[@]}") 00:21:00.716 20:15:58 -- scheduler/interrupt.sh@28 -- # collect_cpu_idle 00:21:00.716 20:15:58 -- scheduler/common.sh@619 -- # (( 8 > 0 )) 00:21:00.716 20:15:58 -- scheduler/common.sh@621 -- # local time=5 00:21:00.716 20:15:58 -- scheduler/common.sh@622 -- # local cpu 00:21:00.716 20:15:58 -- scheduler/common.sh@623 -- # local samples 00:21:00.716 20:15:58 -- scheduler/common.sh@624 -- # is_idle=() 00:21:00.716 20:15:58 -- scheduler/common.sh@624 -- # local -g is_idle 00:21:00.716 20:15:58 -- scheduler/common.sh@626 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' '1 2 3 4 37 38 39 40' 5 00:21:00.716 Collecting cpu idle stats (cpus: 1 2 3 4 37 38 39 40) for 5 seconds... 
00:21:00.716 20:15:58 -- scheduler/common.sh@629 -- # get_cpu_time 5 idle 0 1 1 2 3 4 37 38 39 40 00:21:00.716 20:15:58 -- scheduler/common.sh@476 -- # xtrace_disable 00:21:00.716 20:15:58 -- common/autotest_common.sh@10 -- # set +x 00:21:07.280 20:16:04 -- scheduler/common.sh@631 -- # local user_load 00:21:07.280 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.280 20:16:04 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:07.280 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 1 '0 0 0 0 0' 0 00:21:07.280 * cpu1 idle samples: 0 0 0 0 0 (avg: 0%) 00:21:07.280 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 1 user 00:21:07.280 20:16:04 -- scheduler/common.sh@659 -- # local cpu=1 time=user 00:21:07.280 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.280 20:16:04 -- scheduler/common.sh@663 -- # [[ -v raw_samples_1 ]] 00:21:07.280 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_1 00:21:07.280 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.280 20:16:04 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.280 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.280 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.280 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.280 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.280 20:16:04 -- scheduler/common.sh@678 -- # usage=102 00:21:07.280 20:16:04 -- scheduler/common.sh@679 -- # usage=100 00:21:07.280 20:16:04 -- scheduler/common.sh@681 -- # printf %u 100 00:21:07.281 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 1 user 100 00:21:07.281 * cpu1 user usage: 100 00:21:07.281 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 1 '275613 275715 275817 275919 276021' 00:21:07.281 * cpu1 user samples: 275613 275715 275817 275919 276021 00:21:07.281 20:16:04 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 1 '1334 1334 1334 1334 1334' 00:21:07.281 * cpu1 nice samples: 1334 1334 1334 1334 1334 00:21:07.281 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 1 '12674 12674 12675 12675 12675' 00:21:07.281 * cpu1 system samples: 12674 12674 12675 12675 12675 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # user_load=100 00:21:07.281 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.281 20:16:04 -- scheduler/common.sh@648 -- # (( user_load <= 15 )) 00:21:07.281 20:16:04 -- scheduler/common.sh@652 -- # printf '* cpu%u is not idle\n' 1 00:21:07.281 * cpu1 is not idle 00:21:07.281 20:16:04 -- scheduler/common.sh@653 -- # is_idle[cpu]=0 00:21:07.281 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.281 20:16:04 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:07.281 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 2 '100 99 100 100 100' 99 00:21:07.281 * cpu2 idle samples: 100 99 100 100 100 (avg: 99%) 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 2 user 00:21:07.281 20:16:04 -- scheduler/common.sh@659 -- # local cpu=2 time=user 00:21:07.281 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.281 20:16:04 -- 
scheduler/common.sh@663 -- # [[ -v raw_samples_2 ]] 00:21:07.281 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_2 00:21:07.281 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # trap - ERR 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # print_backtrace 00:21:07.281 20:16:04 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:07.281 20:16:04 -- common/autotest_common.sh@1132 -- # return 0 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@679 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@681 -- # printf %u 0 00:21:07.281 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 2 user 0 00:21:07.281 * cpu2 user usage: 0 00:21:07.281 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 2 '164591 164592 164592 164592 164592' 00:21:07.281 * cpu2 user samples: 164591 164592 164592 164592 164592 00:21:07.281 20:16:04 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0' 00:21:07.281 * cpu2 nice samples: 0 0 0 0 0 00:21:07.281 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 2 '10453 10453 10453 10453 10453' 00:21:07.281 * cpu2 system samples: 10453 10453 10453 10453 10453 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # user_load=0 00:21:07.281 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.281 20:16:04 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 2 00:21:07.281 * cpu2 is idle 00:21:07.281 20:16:04 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:07.281 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.281 20:16:04 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:07.281 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 3 '100 99 100 100 100' 99 00:21:07.281 * cpu3 idle samples: 100 99 100 100 100 (avg: 99%) 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 3 user 00:21:07.281 20:16:04 -- scheduler/common.sh@659 -- # local cpu=3 time=user 00:21:07.281 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.281 20:16:04 -- scheduler/common.sh@663 -- # [[ -v raw_samples_3 ]] 00:21:07.281 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_3 00:21:07.281 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # trap - ERR 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # print_backtrace 00:21:07.281 20:16:04 
-- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:07.281 20:16:04 -- common/autotest_common.sh@1132 -- # return 0 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@679 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@681 -- # printf %u 0 00:21:07.281 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 3 user 0 00:21:07.281 * cpu3 user usage: 0 00:21:07.281 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 3 '146147 146148 146148 146148 146148' 00:21:07.281 * cpu3 user samples: 146147 146148 146148 146148 146148 00:21:07.281 20:16:04 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 3 '17 17 17 17 17' 00:21:07.281 * cpu3 nice samples: 17 17 17 17 17 00:21:07.281 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 3 '10060 10060 10060 10060 10060' 00:21:07.281 * cpu3 system samples: 10060 10060 10060 10060 10060 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # user_load=0 00:21:07.281 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.281 20:16:04 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 3 00:21:07.281 * cpu3 is idle 00:21:07.281 20:16:04 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:07.281 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.281 20:16:04 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:07.281 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 4 '100 100 100 100 100' 100 00:21:07.281 * cpu4 idle samples: 100 100 100 100 100 (avg: 100%) 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 4 user 00:21:07.281 20:16:04 -- scheduler/common.sh@659 -- # local cpu=4 time=user 00:21:07.281 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.281 20:16:04 -- scheduler/common.sh@663 -- # [[ -v raw_samples_4 ]] 00:21:07.281 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_4 00:21:07.281 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # trap - ERR 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # print_backtrace 00:21:07.281 20:16:04 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:07.281 20:16:04 -- common/autotest_common.sh@1132 -- # return 0 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@679 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@681 -- # printf %u 0 00:21:07.281 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 4 user 0 00:21:07.281 * cpu4 user usage: 0 00:21:07.281 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 4 '45979 45979 45979 45979 45979' 00:21:07.281 * cpu4 user samples: 45979 45979 45979 45979 45979 00:21:07.281 20:16:04 -- scheduler/common.sh@684 -- # 
printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0' 00:21:07.281 * cpu4 nice samples: 0 0 0 0 0 00:21:07.281 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 4 '10277 10277 10277 10277 10277' 00:21:07.281 * cpu4 system samples: 10277 10277 10277 10277 10277 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # user_load=0 00:21:07.281 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.281 20:16:04 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 4 00:21:07.281 * cpu4 is idle 00:21:07.281 20:16:04 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:07.281 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.281 20:16:04 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:07.281 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 37 '100 99 100 100 100' 99 00:21:07.281 * cpu37 idle samples: 100 99 100 100 100 (avg: 99%) 00:21:07.281 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 37 user 00:21:07.281 20:16:04 -- scheduler/common.sh@659 -- # local cpu=37 time=user 00:21:07.281 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.281 20:16:04 -- scheduler/common.sh@663 -- # [[ -v raw_samples_37 ]] 00:21:07.281 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_37 00:21:07.281 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.281 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # trap - ERR 00:21:07.281 20:16:04 -- scheduler/common.sh@672 -- # print_backtrace 00:21:07.281 20:16:04 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:07.281 20:16:04 -- common/autotest_common.sh@1132 -- # return 0 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.281 20:16:04 -- scheduler/common.sh@678 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@679 -- # usage=0 00:21:07.281 20:16:04 -- scheduler/common.sh@681 -- # printf %u 0 00:21:07.281 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 37 user 0 00:21:07.281 * cpu37 user usage: 0 00:21:07.281 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 37 '11788 11789 11789 11789 11789' 00:21:07.281 * cpu37 user samples: 11788 11789 11789 11789 11789 00:21:07.281 20:16:04 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 37 '0 0 0 0 0' 00:21:07.281 * cpu37 nice samples: 0 0 0 0 0 00:21:07.281 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 37 '3616 3616 3616 3616 3616' 00:21:07.281 * cpu37 system samples: 3616 3616 3616 3616 3616 00:21:07.282 20:16:04 -- scheduler/common.sh@644 -- # user_load=0 00:21:07.282 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.282 20:16:04 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 37 00:21:07.282 * cpu37 is idle 00:21:07.282 20:16:04 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:07.282 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.282 20:16:04 -- scheduler/common.sh@633 -- # 
samples=(${cpu_times[cpu]}) 00:21:07.282 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 38 '100 100 100 100 100' 100 00:21:07.282 * cpu38 idle samples: 100 100 100 100 100 (avg: 100%) 00:21:07.282 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 38 user 00:21:07.282 20:16:04 -- scheduler/common.sh@659 -- # local cpu=38 time=user 00:21:07.282 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.282 20:16:04 -- scheduler/common.sh@663 -- # [[ -v raw_samples_38 ]] 00:21:07.282 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_38 00:21:07.282 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # trap - ERR 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # print_backtrace 00:21:07.282 20:16:04 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:07.282 20:16:04 -- common/autotest_common.sh@1132 -- # return 0 00:21:07.282 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.282 20:16:04 -- scheduler/common.sh@678 -- # usage=0 00:21:07.282 20:16:04 -- scheduler/common.sh@679 -- # usage=0 00:21:07.282 20:16:04 -- scheduler/common.sh@681 -- # printf %u 0 00:21:07.282 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 38 user 0 00:21:07.282 * cpu38 user usage: 0 00:21:07.282 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 38 '10730 10730 10730 10730 10730' 00:21:07.282 * cpu38 user samples: 10730 10730 10730 10730 10730 00:21:07.282 20:16:04 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 38 '0 0 0 0 0' 00:21:07.282 * cpu38 nice samples: 0 0 0 0 0 00:21:07.282 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 38 '3258 3258 3258 3258 3258' 00:21:07.282 * cpu38 system samples: 3258 3258 3258 3258 3258 00:21:07.282 20:16:04 -- scheduler/common.sh@644 -- # user_load=0 00:21:07.282 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.282 20:16:04 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 38 00:21:07.282 * cpu38 is idle 00:21:07.282 20:16:04 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:07.282 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.282 20:16:04 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:07.282 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 39 '100 100 100 100 100' 100 00:21:07.282 * cpu39 idle samples: 100 100 100 100 100 (avg: 100%) 00:21:07.282 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 39 user 00:21:07.282 20:16:04 -- scheduler/common.sh@659 -- # local cpu=39 time=user 00:21:07.282 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.282 20:16:04 -- scheduler/common.sh@663 -- # [[ -v raw_samples_39 ]] 00:21:07.282 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_39 00:21:07.282 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.282 20:16:04 -- 
scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # trap - ERR 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # print_backtrace 00:21:07.282 20:16:04 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:07.282 20:16:04 -- common/autotest_common.sh@1132 -- # return 0 00:21:07.282 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.282 20:16:04 -- scheduler/common.sh@678 -- # usage=0 00:21:07.282 20:16:04 -- scheduler/common.sh@679 -- # usage=0 00:21:07.282 20:16:04 -- scheduler/common.sh@681 -- # printf %u 0 00:21:07.282 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 39 user 0 00:21:07.282 * cpu39 user usage: 0 00:21:07.282 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 39 '12402 12402 12402 12402 12402' 00:21:07.282 * cpu39 user samples: 12402 12402 12402 12402 12402 00:21:07.282 20:16:04 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 39 '0 0 0 0 0' 00:21:07.282 * cpu39 nice samples: 0 0 0 0 0 00:21:07.282 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 39 '4570 4570 4570 4570 4570' 00:21:07.282 * cpu39 system samples: 4570 4570 4570 4570 4570 00:21:07.282 20:16:04 -- scheduler/common.sh@644 -- # user_load=0 00:21:07.282 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.282 20:16:04 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 39 00:21:07.282 * cpu39 is idle 00:21:07.282 20:16:04 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:07.282 20:16:04 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:07.282 20:16:04 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:07.282 20:16:04 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 40 '100 100 100 100 100' 100 00:21:07.282 * cpu40 idle samples: 100 100 100 100 100 (avg: 100%) 00:21:07.282 20:16:04 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 40 user 00:21:07.282 20:16:04 -- scheduler/common.sh@659 -- # local cpu=40 time=user 00:21:07.282 20:16:04 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:07.282 20:16:04 -- scheduler/common.sh@663 -- # [[ -v raw_samples_40 ]] 00:21:07.282 20:16:04 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_40 00:21:07.282 20:16:04 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:07.282 20:16:04 -- scheduler/common.sh@671 -- # case "$time" in 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # trap - ERR 00:21:07.282 20:16:04 -- scheduler/common.sh@672 -- # print_backtrace 00:21:07.282 20:16:04 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:07.282 20:16:04 -- common/autotest_common.sh@1132 -- # return 0 00:21:07.282 20:16:04 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:07.282 20:16:04 -- scheduler/common.sh@678 -- # usage=0 
00:21:07.282 20:16:04 -- scheduler/common.sh@679 -- # usage=0 00:21:07.282 20:16:04 -- scheduler/common.sh@681 -- # printf %u 0 00:21:07.282 20:16:04 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 40 user 0 00:21:07.282 * cpu40 user usage: 0 00:21:07.282 20:16:04 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 40 '17104 17104 17104 17104 17104' 00:21:07.282 * cpu40 user samples: 17104 17104 17104 17104 17104 00:21:07.282 20:16:04 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 40 '0 0 0 0 0' 00:21:07.282 * cpu40 nice samples: 0 0 0 0 0 00:21:07.282 20:16:04 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 40 '4852 4852 4852 4852 4852' 00:21:07.282 * cpu40 system samples: 4852 4852 4852 4852 4852 00:21:07.282 20:16:04 -- scheduler/common.sh@644 -- # user_load=0 00:21:07.282 20:16:04 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:07.282 20:16:04 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 40 00:21:07.282 * cpu40 is idle 00:21:07.282 20:16:04 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:07.282 20:16:04 -- scheduler/interrupt.sh@31 -- # rpc_cmd framework_get_reactors 00:21:07.282 20:16:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.282 20:16:04 -- scheduler/interrupt.sh@31 -- # jq -r '.reactors[]' 00:21:07.282 20:16:04 -- common/autotest_common.sh@10 -- # set +x 00:21:07.282 20:16:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.282 20:16:04 -- scheduler/interrupt.sh@31 -- # reactor_framework='{ 00:21:07.282 "lcore": 1, 00:21:07.282 "busy": 581806658, 00:21:07.282 "idle": 17159979710, 00:21:07.282 "in_interrupt": false, 00:21:07.282 "core_freq": 1600, 00:21:07.282 "lw_threads": [ 00:21:07.282 { 00:21:07.282 "name": "app_thread", 00:21:07.282 "id": 1, 00:21:07.282 "cpumask": "2", 00:21:07.282 "elapsed": 17760938382 00:21:07.282 } 00:21:07.282 ] 00:21:07.282 } 00:21:07.282 { 00:21:07.282 "lcore": 2, 00:21:07.282 "busy": 0, 00:21:07.282 "idle": 2305846506, 00:21:07.282 "in_interrupt": true, 00:21:07.282 "core_freq": 2300, 00:21:07.282 "lw_threads": [] 00:21:07.282 } 00:21:07.282 { 00:21:07.282 "lcore": 3, 00:21:07.282 "busy": 0, 00:21:07.282 "idle": 2305506962, 00:21:07.282 "in_interrupt": true, 00:21:07.282 "core_freq": 2300, 00:21:07.282 "lw_threads": [] 00:21:07.282 } 00:21:07.282 { 00:21:07.282 "lcore": 4, 00:21:07.282 "busy": 0, 00:21:07.282 "idle": 2305494414, 00:21:07.282 "in_interrupt": true, 00:21:07.282 "core_freq": 2300, 00:21:07.282 "lw_threads": [] 00:21:07.282 } 00:21:07.282 { 00:21:07.282 "lcore": 37, 00:21:07.282 "busy": 0, 00:21:07.282 "idle": 2305849582, 00:21:07.282 "in_interrupt": true, 00:21:07.282 "core_freq": 2300, 00:21:07.282 "lw_threads": [] 00:21:07.282 } 00:21:07.282 { 00:21:07.282 "lcore": 38, 00:21:07.282 "busy": 0, 00:21:07.282 "idle": 2306099682, 00:21:07.282 "in_interrupt": true, 00:21:07.282 "core_freq": 2300, 00:21:07.282 "lw_threads": [] 00:21:07.282 } 00:21:07.282 { 00:21:07.282 "lcore": 39, 00:21:07.282 "busy": 0, 00:21:07.282 "idle": 2306327486, 00:21:07.282 "in_interrupt": true, 00:21:07.282 "core_freq": 2300, 00:21:07.282 "lw_threads": [] 00:21:07.282 } 00:21:07.282 { 00:21:07.282 "lcore": 40, 00:21:07.282 "busy": 0, 00:21:07.282 "idle": 2306516116, 00:21:07.282 "in_interrupt": true, 00:21:07.283 "core_freq": 2300, 00:21:07.283 "lw_threads": [] 00:21:07.283 }' 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@33 -- # jq -r 
'select(.lcore == 2) | .lw_threads[].id' 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id' 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:21:07.283 20:16:04 -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id' 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 37) | .lw_threads[].id' 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 38) | .lw_threads[].id' 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 39) | .lw_threads[].id' 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 40) | .lw_threads[].id' 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@33 -- # [[ -z '' ]] 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@41 -- # (( is_idle[cpu] == 0 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 
00:21:07.283 20:16:05 -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 )) 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@49 -- # busy_cpus=("${cpus[@]:1:3}") 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@49 -- # threads=() 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}" 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@54 -- # mask_cpus 2 00:21:07.283 20:16:05 -- scheduler/common.sh@166 -- # fold_array_onto_string 2 00:21:07.283 20:16:05 -- scheduler/common.sh@27 -- # cpus=('2') 00:21:07.283 20:16:05 -- scheduler/common.sh@27 -- # local cpus 00:21:07.283 20:16:05 -- scheduler/common.sh@29 -- # local IFS=, 00:21:07.283 20:16:05 -- scheduler/common.sh@30 -- # echo 2 00:21:07.283 20:16:05 -- scheduler/common.sh@166 -- # printf '[%s]\n' 2 00:21:07.283 20:16:05 -- scheduler/interrupt.sh@54 -- # create_thread -n thread2 -m '[2]' -a 100 00:21:07.283 20:16:05 -- scheduler/common.sh@464 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread2 -m '[2]' -a 100 00:21:07.283 20:16:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:07.283 20:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:07.542 20:16:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:07.542 20:16:05 -- scheduler/interrupt.sh@54 -- # threads[cpu]=2 00:21:07.542 20:16:05 -- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu") 00:21:07.542 20:16:05 -- scheduler/interrupt.sh@55 -- # collect_cpu_idle 00:21:07.542 20:16:05 -- scheduler/common.sh@619 -- # (( 1 > 0 )) 00:21:07.542 20:16:05 -- scheduler/common.sh@621 -- # local time=5 00:21:07.542 20:16:05 -- scheduler/common.sh@622 -- # local cpu 00:21:07.542 20:16:05 -- scheduler/common.sh@623 -- # local samples 00:21:07.542 20:16:05 -- scheduler/common.sh@624 -- # is_idle=() 00:21:07.542 20:16:05 -- scheduler/common.sh@624 -- # local -g is_idle 00:21:07.542 20:16:05 -- scheduler/common.sh@626 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5 00:21:07.542 Collecting cpu idle stats (cpus: 2) for 5 seconds... 
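[annotation] The create_thread call traced above pins a fully busy test thread to one core through the scheduler plugin RPC before the next five-second idle collection. A small sketch of that call follows; the method, mask, and flags are copied verbatim from the xtrace, while the rpc.py location and the reading of "-a 100" as a 100%-active thread are assumptions on my part.

```bash
#!/usr/bin/env bash
# Sketch of the RPC used in the trace to create a busy thread on one core.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvme-phy-autotest/spdk}  # assumed path

create_busy_thread() {
	local name=$1 cpu=$2
	# -n thread name, -m cpumask as a core list, -a 100 (per the trace) keeps
	# the thread active, which is why cpu2's user load jumps to 100% below.
	"$SPDK_DIR/scripts/rpc.py" --plugin scheduler_plugin \
		scheduler_thread_create -n "$name" -m "[$cpu]" -a 100
}

create_busy_thread thread2 2
```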
00:21:07.542 20:16:05 -- scheduler/common.sh@629 -- # get_cpu_time 5 idle 0 1 2 00:21:07.542 20:16:05 -- scheduler/common.sh@476 -- # xtrace_disable 00:21:07.542 20:16:05 -- common/autotest_common.sh@10 -- # set +x 00:21:14.107 20:16:11 -- scheduler/common.sh@631 -- # local user_load 00:21:14.107 20:16:11 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:14.107 20:16:11 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:14.107 20:16:11 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 2 '100 6 0 0 0' 21 00:21:14.107 * cpu2 idle samples: 100 6 0 0 0 (avg: 21%) 00:21:14.107 20:16:11 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 2 user 00:21:14.107 20:16:11 -- scheduler/common.sh@659 -- # local cpu=2 time=user 00:21:14.107 20:16:11 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:14.107 20:16:11 -- scheduler/common.sh@663 -- # [[ -v raw_samples_2 ]] 00:21:14.107 20:16:11 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_2 00:21:14.107 20:16:11 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:14.107 20:16:11 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:14.107 20:16:11 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:14.107 20:16:11 -- scheduler/common.sh@671 -- # case "$time" in 00:21:14.107 20:16:11 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:14.107 20:16:11 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:14.107 20:16:11 -- scheduler/common.sh@678 -- # usage=102 00:21:14.107 20:16:11 -- scheduler/common.sh@679 -- # usage=100 00:21:14.107 20:16:11 -- scheduler/common.sh@681 -- # printf %u 100 00:21:14.107 20:16:11 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 2 user 100 00:21:14.107 * cpu2 user usage: 100 00:21:14.108 20:16:11 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 2 '164592 164686 164788 164889 164991' 00:21:14.108 * cpu2 user samples: 164592 164686 164788 164889 164991 00:21:14.108 20:16:11 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0' 00:21:14.108 * cpu2 nice samples: 0 0 0 0 0 00:21:14.108 20:16:11 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 2 '10453 10453 10453 10453 10453' 00:21:14.108 * cpu2 system samples: 10453 10453 10453 10453 10453 00:21:14.108 20:16:11 -- scheduler/common.sh@644 -- # user_load=100 00:21:14.108 20:16:11 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:14.108 20:16:11 -- scheduler/common.sh@648 -- # (( user_load <= 15 )) 00:21:14.108 20:16:11 -- scheduler/common.sh@652 -- # printf '* cpu%u is not idle\n' 2 00:21:14.108 * cpu2 is not idle 00:21:14.108 20:16:11 -- scheduler/common.sh@653 -- # is_idle[cpu]=0 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]' 00:21:14.108 20:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.108 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:14.108 20:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@56 -- # reactor_framework='{ 00:21:14.108 "lcore": 1, 00:21:14.108 "busy": 3587909030, 00:21:14.108 "idle": 29052399006, 00:21:14.108 "in_interrupt": false, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [ 00:21:14.108 { 00:21:14.108 "name": 
"app_thread", 00:21:14.108 "id": 1, 00:21:14.108 "cpumask": "2", 00:21:14.108 "elapsed": 32659495894 00:21:14.108 } 00:21:14.108 ] 00:21:14.108 } 00:21:14.108 { 00:21:14.108 "lcore": 2, 00:21:14.108 "busy": 11041950286, 00:21:14.108 "idle": 2995903816, 00:21:14.108 "in_interrupt": false, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [ 00:21:14.108 { 00:21:14.108 "name": "thread2", 00:21:14.108 "id": 2, 00:21:14.108 "cpumask": "4", 00:21:14.108 "elapsed": 10875534898 00:21:14.108 } 00:21:14.108 ] 00:21:14.108 } 00:21:14.108 { 00:21:14.108 "lcore": 3, 00:21:14.108 "busy": 0, 00:21:14.108 "idle": 2305506962, 00:21:14.108 "in_interrupt": true, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [] 00:21:14.108 } 00:21:14.108 { 00:21:14.108 "lcore": 4, 00:21:14.108 "busy": 0, 00:21:14.108 "idle": 2305494414, 00:21:14.108 "in_interrupt": true, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [] 00:21:14.108 } 00:21:14.108 { 00:21:14.108 "lcore": 37, 00:21:14.108 "busy": 0, 00:21:14.108 "idle": 2305849582, 00:21:14.108 "in_interrupt": true, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [] 00:21:14.108 } 00:21:14.108 { 00:21:14.108 "lcore": 38, 00:21:14.108 "busy": 0, 00:21:14.108 "idle": 2306099682, 00:21:14.108 "in_interrupt": true, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [] 00:21:14.108 } 00:21:14.108 { 00:21:14.108 "lcore": 39, 00:21:14.108 "busy": 0, 00:21:14.108 "idle": 2306327486, 00:21:14.108 "in_interrupt": true, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [] 00:21:14.108 } 00:21:14.108 { 00:21:14.108 "lcore": 40, 00:21:14.108 "busy": 0, 00:21:14.108 "idle": 2306516116, 00:21:14.108 "in_interrupt": true, 00:21:14.108 "core_freq": 2300, 00:21:14.108 "lw_threads": [] 00:21:14.108 }' 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 2) | .lw_threads[] | select(.name == "thread2")' 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@57 -- # [[ -n { 00:21:14.108 "name": "thread2", 00:21:14.108 "id": 2, 00:21:14.108 "cpumask": "4", 00:21:14.108 "elapsed": 10875534898 00:21:14.108 } ]] 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 )) 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}" 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@54 -- # mask_cpus 3 00:21:14.108 20:16:11 -- scheduler/common.sh@166 -- # fold_array_onto_string 3 00:21:14.108 20:16:11 -- scheduler/common.sh@27 -- # cpus=('3') 00:21:14.108 20:16:11 -- scheduler/common.sh@27 -- # local cpus 00:21:14.108 20:16:11 -- scheduler/common.sh@29 -- # local IFS=, 00:21:14.108 20:16:11 -- scheduler/common.sh@30 -- # echo 3 00:21:14.108 20:16:11 -- scheduler/common.sh@166 -- # printf '[%s]\n' 3 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@54 -- # create_thread -n thread3 -m '[3]' -a 100 00:21:14.108 20:16:11 -- scheduler/common.sh@464 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread3 -m '[3]' -a 100 00:21:14.108 20:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:14.108 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:14.108 20:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@54 -- # threads[cpu]=3 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu") 00:21:14.108 20:16:11 -- scheduler/interrupt.sh@55 -- # collect_cpu_idle 00:21:14.108 20:16:11 -- scheduler/common.sh@619 -- # (( 1 > 0 )) 00:21:14.108 20:16:11 -- scheduler/common.sh@621 -- # 
local time=5 00:21:14.108 20:16:11 -- scheduler/common.sh@622 -- # local cpu 00:21:14.108 20:16:11 -- scheduler/common.sh@623 -- # local samples 00:21:14.108 20:16:11 -- scheduler/common.sh@624 -- # is_idle=() 00:21:14.108 20:16:11 -- scheduler/common.sh@624 -- # local -g is_idle 00:21:14.108 20:16:11 -- scheduler/common.sh@626 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5 00:21:14.108 Collecting cpu idle stats (cpus: 3) for 5 seconds... 00:21:14.108 20:16:11 -- scheduler/common.sh@629 -- # get_cpu_time 5 idle 0 1 3 00:21:14.108 20:16:11 -- scheduler/common.sh@476 -- # xtrace_disable 00:21:14.108 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 20:16:17 -- scheduler/common.sh@631 -- # local user_load 00:21:20.671 20:16:17 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:20.671 20:16:17 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:20.671 20:16:17 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 3 '56 0 0 0 0' 11 00:21:20.671 * cpu3 idle samples: 56 0 0 0 0 (avg: 11%) 00:21:20.671 20:16:17 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 3 user 00:21:20.671 20:16:17 -- scheduler/common.sh@659 -- # local cpu=3 time=user 00:21:20.671 20:16:17 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:20.671 20:16:17 -- scheduler/common.sh@663 -- # [[ -v raw_samples_3 ]] 00:21:20.671 20:16:17 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_3 00:21:20.671 20:16:17 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:20.671 20:16:17 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:20.671 20:16:17 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:20.671 20:16:17 -- scheduler/common.sh@671 -- # case "$time" in 00:21:20.671 20:16:17 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:20.671 20:16:17 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:20.671 20:16:17 -- scheduler/common.sh@678 -- # usage=101 00:21:20.671 20:16:17 -- scheduler/common.sh@679 -- # usage=100 00:21:20.671 20:16:17 -- scheduler/common.sh@681 -- # printf %u 100 00:21:20.671 20:16:17 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 3 user 100 00:21:20.671 * cpu3 user usage: 100 00:21:20.671 20:16:17 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 3 '146192 146293 146395 146496 146597' 00:21:20.671 * cpu3 user samples: 146192 146293 146395 146496 146597 00:21:20.671 20:16:17 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 3 '17 17 17 17 17' 00:21:20.671 * cpu3 nice samples: 17 17 17 17 17 00:21:20.671 20:16:17 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 3 '10060 10060 10060 10060 10060' 00:21:20.671 * cpu3 system samples: 10060 10060 10060 10060 10060 00:21:20.671 20:16:17 -- scheduler/common.sh@644 -- # user_load=100 00:21:20.671 20:16:17 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:20.671 20:16:17 -- scheduler/common.sh@648 -- # (( user_load <= 15 )) 00:21:20.671 20:16:17 -- scheduler/common.sh@652 -- # printf '* cpu%u is not idle\n' 3 00:21:20.671 * cpu3 is not idle 00:21:20.671 20:16:17 -- scheduler/common.sh@653 -- # is_idle[cpu]=0 00:21:20.671 20:16:17 -- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors 00:21:20.671 20:16:17 -- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]' 00:21:20.671 20:16:17 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.671 20:16:17 -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 20:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.671 20:16:17 -- scheduler/interrupt.sh@56 -- # reactor_framework='{ 00:21:20.671 "lcore": 1, 00:21:20.671 "busy": 3605678800, 00:21:20.671 "idle": 43756297496, 00:21:20.671 "in_interrupt": false, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [ 00:21:20.671 { 00:21:20.671 "name": "app_thread", 00:21:20.671 "id": 1, 00:21:20.671 "cpumask": "2", 00:21:20.671 "elapsed": 47381167244 00:21:20.671 } 00:21:20.671 ] 00:21:20.671 } 00:21:20.671 { 00:21:20.671 "lcore": 2, 00:21:20.671 "busy": 25764853052, 00:21:20.671 "idle": 2995903816, 00:21:20.671 "in_interrupt": false, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [ 00:21:20.671 { 00:21:20.671 "name": "thread2", 00:21:20.671 "id": 2, 00:21:20.671 "cpumask": "4", 00:21:20.671 "elapsed": 25597206248 00:21:20.671 } 00:21:20.671 ] 00:21:20.671 } 00:21:20.671 { 00:21:20.671 "lcore": 3, 00:21:20.671 "busy": 12192139210, 00:21:20.671 "idle": 3225138366, 00:21:20.671 "in_interrupt": false, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [ 00:21:20.671 { 00:21:20.671 "name": "thread3", 00:21:20.671 "id": 3, 00:21:20.671 "cpumask": "8", 00:21:20.671 "elapsed": 11794417654 00:21:20.671 } 00:21:20.671 ] 00:21:20.671 } 00:21:20.671 { 00:21:20.671 "lcore": 4, 00:21:20.671 "busy": 0, 00:21:20.671 "idle": 2305494414, 00:21:20.671 "in_interrupt": true, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [] 00:21:20.671 } 00:21:20.671 { 00:21:20.671 "lcore": 37, 00:21:20.671 "busy": 0, 00:21:20.671 "idle": 2305849582, 00:21:20.671 "in_interrupt": true, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [] 00:21:20.671 } 00:21:20.671 { 00:21:20.671 "lcore": 38, 00:21:20.671 "busy": 0, 00:21:20.671 "idle": 2306099682, 00:21:20.671 "in_interrupt": true, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [] 00:21:20.671 } 00:21:20.671 { 00:21:20.671 "lcore": 39, 00:21:20.671 "busy": 0, 00:21:20.671 "idle": 2306327486, 00:21:20.671 "in_interrupt": true, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [] 00:21:20.671 } 00:21:20.671 { 00:21:20.671 "lcore": 40, 00:21:20.671 "busy": 0, 00:21:20.671 "idle": 2306516116, 00:21:20.671 "in_interrupt": true, 00:21:20.671 "core_freq": 2300, 00:21:20.671 "lw_threads": [] 00:21:20.671 }' 00:21:20.671 20:16:17 -- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 3) | .lw_threads[] | select(.name == "thread3")' 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@57 -- # [[ -n { 00:21:20.671 "name": "thread3", 00:21:20.671 "id": 3, 00:21:20.671 "cpumask": "8", 00:21:20.671 "elapsed": 11794417654 00:21:20.671 } ]] 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 )) 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}" 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@54 -- # mask_cpus 4 00:21:20.671 20:16:18 -- scheduler/common.sh@166 -- # fold_array_onto_string 4 00:21:20.671 20:16:18 -- scheduler/common.sh@27 -- # cpus=('4') 00:21:20.671 20:16:18 -- scheduler/common.sh@27 -- # local cpus 00:21:20.671 20:16:18 -- scheduler/common.sh@29 -- # local IFS=, 00:21:20.671 20:16:18 -- scheduler/common.sh@30 -- # echo 4 00:21:20.671 20:16:18 -- scheduler/common.sh@166 -- # printf '[%s]\n' 4 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@54 -- # create_thread -n thread4 -m '[4]' -a 100 00:21:20.671 20:16:18 -- 
scheduler/common.sh@464 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread4 -m '[4]' -a 100 00:21:20.671 20:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:20.671 20:16:18 -- common/autotest_common.sh@10 -- # set +x 00:21:20.671 20:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@54 -- # threads[cpu]=4 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu") 00:21:20.671 20:16:18 -- scheduler/interrupt.sh@55 -- # collect_cpu_idle 00:21:20.671 20:16:18 -- scheduler/common.sh@619 -- # (( 1 > 0 )) 00:21:20.671 20:16:18 -- scheduler/common.sh@621 -- # local time=5 00:21:20.671 20:16:18 -- scheduler/common.sh@622 -- # local cpu 00:21:20.671 20:16:18 -- scheduler/common.sh@623 -- # local samples 00:21:20.671 20:16:18 -- scheduler/common.sh@624 -- # is_idle=() 00:21:20.671 20:16:18 -- scheduler/common.sh@624 -- # local -g is_idle 00:21:20.672 20:16:18 -- scheduler/common.sh@626 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5 00:21:20.672 Collecting cpu idle stats (cpus: 4) for 5 seconds... 00:21:20.672 20:16:18 -- scheduler/common.sh@629 -- # get_cpu_time 5 idle 0 1 4 00:21:20.672 20:16:18 -- scheduler/common.sh@476 -- # xtrace_disable 00:21:20.672 20:16:18 -- common/autotest_common.sh@10 -- # set +x 00:21:27.244 20:16:24 -- scheduler/common.sh@631 -- # local user_load 00:21:27.244 20:16:24 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:27.244 20:16:24 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:27.244 20:16:24 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 4 '17 0 0 0 0' 3 00:21:27.244 * cpu4 idle samples: 17 0 0 0 0 (avg: 3%) 00:21:27.244 20:16:24 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 4 user 00:21:27.244 20:16:24 -- scheduler/common.sh@659 -- # local cpu=4 time=user 00:21:27.244 20:16:24 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:27.244 20:16:24 -- scheduler/common.sh@663 -- # [[ -v raw_samples_4 ]] 00:21:27.244 20:16:24 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_4 00:21:27.244 20:16:24 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:27.244 20:16:24 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:27.244 20:16:24 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:27.244 20:16:24 -- scheduler/common.sh@671 -- # case "$time" in 00:21:27.244 20:16:24 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:27.244 20:16:24 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:27.244 20:16:24 -- scheduler/common.sh@678 -- # usage=102 00:21:27.244 20:16:24 -- scheduler/common.sh@679 -- # usage=100 00:21:27.244 20:16:24 -- scheduler/common.sh@681 -- # printf %u 100 00:21:27.244 20:16:24 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 4 user 100 00:21:27.244 * cpu4 user usage: 100 00:21:27.244 20:16:24 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 4 '46063 46164 46266 46367 46469' 00:21:27.244 * cpu4 user samples: 46063 46164 46266 46367 46469 00:21:27.244 20:16:24 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0' 00:21:27.244 * cpu4 nice samples: 0 0 0 0 0 00:21:27.244 20:16:24 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 4 '10277 10277 10277 10277 10277' 00:21:27.244 * cpu4 system 
samples: 10277 10277 10277 10277 10277 00:21:27.244 20:16:24 -- scheduler/common.sh@644 -- # user_load=100 00:21:27.244 20:16:24 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:27.244 20:16:24 -- scheduler/common.sh@648 -- # (( user_load <= 15 )) 00:21:27.244 20:16:24 -- scheduler/common.sh@652 -- # printf '* cpu%u is not idle\n' 4 00:21:27.244 * cpu4 is not idle 00:21:27.244 20:16:24 -- scheduler/common.sh@653 -- # is_idle[cpu]=0 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]' 00:21:27.244 20:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:27.244 20:16:24 -- common/autotest_common.sh@10 -- # set +x 00:21:27.244 20:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@56 -- # reactor_framework='{ 00:21:27.244 "lcore": 1, 00:21:27.244 "busy": 3624299904, 00:21:27.244 "idle": 58698694060, 00:21:27.244 "in_interrupt": false, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [ 00:21:27.244 { 00:21:27.244 "name": "app_thread", 00:21:27.244 "id": 1, 00:21:27.244 "cpumask": "2", 00:21:27.244 "elapsed": 62342180324 00:21:27.244 } 00:21:27.244 ] 00:21:27.244 } 00:21:27.244 { 00:21:27.244 "lcore": 2, 00:21:27.244 "busy": 40717664200, 00:21:27.244 "idle": 2995903816, 00:21:27.244 "in_interrupt": false, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [ 00:21:27.244 { 00:21:27.244 "name": "thread2", 00:21:27.244 "id": 2, 00:21:27.244 "cpumask": "4", 00:21:27.244 "elapsed": 40558219328 00:21:27.244 } 00:21:27.244 ] 00:21:27.244 } 00:21:27.244 { 00:21:27.244 "lcore": 3, 00:21:27.244 "busy": 27145109774, 00:21:27.244 "idle": 3225138366, 00:21:27.244 "in_interrupt": false, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [ 00:21:27.244 { 00:21:27.244 "name": "thread3", 00:21:27.244 "id": 3, 00:21:27.244 "cpumask": "8", 00:21:27.244 "elapsed": 26755430734 00:21:27.244 } 00:21:27.244 ] 00:21:27.244 } 00:21:27.244 { 00:21:27.244 "lcore": 4, 00:21:27.244 "busy": 13342416648, 00:21:27.244 "idle": 3225290780, 00:21:27.244 "in_interrupt": false, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [ 00:21:27.244 { 00:21:27.244 "name": "thread4", 00:21:27.244 "id": 4, 00:21:27.244 "cpumask": "10", 00:21:27.244 "elapsed": 12722621146 00:21:27.244 } 00:21:27.244 ] 00:21:27.244 } 00:21:27.244 { 00:21:27.244 "lcore": 37, 00:21:27.244 "busy": 0, 00:21:27.244 "idle": 2305849582, 00:21:27.244 "in_interrupt": true, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [] 00:21:27.244 } 00:21:27.244 { 00:21:27.244 "lcore": 38, 00:21:27.244 "busy": 0, 00:21:27.244 "idle": 2306099682, 00:21:27.244 "in_interrupt": true, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [] 00:21:27.244 } 00:21:27.244 { 00:21:27.244 "lcore": 39, 00:21:27.244 "busy": 0, 00:21:27.244 "idle": 2306327486, 00:21:27.244 "in_interrupt": true, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [] 00:21:27.244 } 00:21:27.244 { 00:21:27.244 "lcore": 40, 00:21:27.244 "busy": 0, 00:21:27.244 "idle": 2306516116, 00:21:27.244 "in_interrupt": true, 00:21:27.244 "core_freq": 2300, 00:21:27.244 "lw_threads": [] 00:21:27.244 }' 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 4) | .lw_threads[] | select(.name == "thread4")' 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@57 -- # [[ -n { 00:21:27.244 "name": "thread4", 00:21:27.244 "id": 4, 00:21:27.244 "cpumask": "10", 00:21:27.244 
"elapsed": 12722621146 00:21:27.244 } ]] 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 )) 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}" 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@64 -- # active_thread 2 0 00:21:27.244 20:16:24 -- scheduler/common.sh@472 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 2 0 00:21:27.244 20:16:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:27.244 20:16:24 -- common/autotest_common.sh@10 -- # set +x 00:21:27.244 20:16:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu") 00:21:27.244 20:16:24 -- scheduler/interrupt.sh@66 -- # collect_cpu_idle 00:21:27.244 20:16:24 -- scheduler/common.sh@619 -- # (( 1 > 0 )) 00:21:27.244 20:16:24 -- scheduler/common.sh@621 -- # local time=5 00:21:27.244 20:16:24 -- scheduler/common.sh@622 -- # local cpu 00:21:27.244 20:16:24 -- scheduler/common.sh@623 -- # local samples 00:21:27.244 20:16:24 -- scheduler/common.sh@624 -- # is_idle=() 00:21:27.244 20:16:24 -- scheduler/common.sh@624 -- # local -g is_idle 00:21:27.244 20:16:24 -- scheduler/common.sh@626 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5 00:21:27.244 Collecting cpu idle stats (cpus: 2) for 5 seconds... 00:21:27.244 20:16:24 -- scheduler/common.sh@629 -- # get_cpu_time 5 idle 0 1 2 00:21:27.244 20:16:24 -- scheduler/common.sh@476 -- # xtrace_disable 00:21:27.244 20:16:24 -- common/autotest_common.sh@10 -- # set +x 00:21:33.801 20:16:30 -- scheduler/common.sh@631 -- # local user_load 00:21:33.801 20:16:30 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:33.801 20:16:30 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:33.801 20:16:30 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 2 '0 0 35 100 100' 47 00:21:33.801 * cpu2 idle samples: 0 0 35 100 100 (avg: 47%) 00:21:33.801 20:16:30 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 2 user 00:21:33.801 20:16:30 -- scheduler/common.sh@659 -- # local cpu=2 time=user 00:21:33.801 20:16:30 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:33.801 20:16:30 -- scheduler/common.sh@663 -- # [[ -v raw_samples_2 ]] 00:21:33.801 20:16:30 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_2 00:21:33.801 20:16:30 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:33.801 20:16:30 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:33.801 20:16:30 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:33.801 20:16:30 -- scheduler/common.sh@671 -- # case "$time" in 00:21:33.801 20:16:30 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:33.801 20:16:30 -- scheduler/common.sh@672 -- # trap - ERR 00:21:33.801 20:16:30 -- scheduler/common.sh@672 -- # print_backtrace 00:21:33.801 20:16:30 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:33.801 20:16:30 -- common/autotest_common.sh@1132 -- # return 0 00:21:33.801 20:16:30 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:33.801 20:16:30 -- scheduler/common.sh@678 -- # usage=0 00:21:33.801 20:16:30 -- scheduler/common.sh@679 -- # usage=0 00:21:33.801 20:16:30 -- scheduler/common.sh@681 -- # printf %u 0 00:21:33.801 20:16:30 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 2 user 0 00:21:33.801 * cpu2 user usage: 
0 00:21:33.801 20:16:30 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 2 '166524 166626 166692 166692 166692' 00:21:33.801 * cpu2 user samples: 166524 166626 166692 166692 166692 00:21:33.801 20:16:30 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0' 00:21:33.801 * cpu2 nice samples: 0 0 0 0 0 00:21:33.801 20:16:30 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 2 '10453 10453 10453 10453 10453' 00:21:33.801 * cpu2 system samples: 10453 10453 10453 10453 10453 00:21:33.801 20:16:30 -- scheduler/common.sh@644 -- # user_load=0 00:21:33.801 20:16:30 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:33.801 20:16:30 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 2 00:21:33.801 * cpu2 is idle 00:21:33.801 20:16:30 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:33.801 20:16:30 -- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors 00:21:33.801 20:16:30 -- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]' 00:21:33.801 20:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.801 20:16:30 -- common/autotest_common.sh@10 -- # set +x 00:21:33.801 20:16:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@67 -- # reactor_framework='{ 00:21:33.801 "lcore": 1, 00:21:33.801 "busy": 3642330706, 00:21:33.801 "idle": 73616404996, 00:21:33.801 "in_interrupt": false, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [ 00:21:33.801 { 00:21:33.801 "name": "app_thread", 00:21:33.801 "id": 1, 00:21:33.801 "cpumask": "2", 00:21:33.801 "elapsed": 77277899106 00:21:33.801 }, 00:21:33.801 { 00:21:33.801 "name": "thread2", 00:21:33.801 "id": 2, 00:21:33.801 "cpumask": "4", 00:21:33.801 "elapsed": 10404748256 00:21:33.801 } 00:21:33.801 ] 00:21:33.801 } 00:21:33.801 { 00:21:33.801 "lcore": 2, 00:21:33.801 "busy": 41408088284, 00:21:33.801 "idle": 9207437474, 00:21:33.801 "in_interrupt": true, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [] 00:21:33.801 } 00:21:33.801 { 00:21:33.801 "lcore": 3, 00:21:33.801 "busy": 41867774734, 00:21:33.801 "idle": 3225138366, 00:21:33.801 "in_interrupt": false, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [ 00:21:33.801 { 00:21:33.801 "name": "thread3", 00:21:33.801 "id": 3, 00:21:33.801 "cpumask": "8", 00:21:33.801 "elapsed": 41691149516 00:21:33.801 } 00:21:33.801 ] 00:21:33.801 } 00:21:33.801 { 00:21:33.801 "lcore": 4, 00:21:33.801 "busy": 28065171618, 00:21:33.801 "idle": 3225290780, 00:21:33.801 "in_interrupt": false, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [ 00:21:33.801 { 00:21:33.801 "name": "thread4", 00:21:33.801 "id": 4, 00:21:33.801 "cpumask": "10", 00:21:33.801 "elapsed": 27658339928 00:21:33.801 } 00:21:33.801 ] 00:21:33.801 } 00:21:33.801 { 00:21:33.801 "lcore": 37, 00:21:33.801 "busy": 0, 00:21:33.801 "idle": 2305849582, 00:21:33.801 "in_interrupt": true, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [] 00:21:33.801 } 00:21:33.801 { 00:21:33.801 "lcore": 38, 00:21:33.801 "busy": 0, 00:21:33.801 "idle": 2306099682, 00:21:33.801 "in_interrupt": true, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [] 00:21:33.801 } 00:21:33.801 { 00:21:33.801 "lcore": 39, 00:21:33.801 "busy": 0, 00:21:33.801 "idle": 2306327486, 00:21:33.801 "in_interrupt": true, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [] 00:21:33.801 } 00:21:33.801 { 00:21:33.801 "lcore": 40, 00:21:33.801 "busy": 0, 00:21:33.801 "idle": 2306516116, 
00:21:33.801 "in_interrupt": true, 00:21:33.801 "core_freq": 2300, 00:21:33.801 "lw_threads": [] 00:21:33.801 }' 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 2) | .lw_threads[].id' 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@68 -- # [[ -z '' ]] 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread2")' 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@69 -- # [[ -n { 00:21:33.801 "name": "thread2", 00:21:33.801 "id": 2, 00:21:33.801 "cpumask": "4", 00:21:33.801 "elapsed": 10404748256 00:21:33.801 } ]] 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 )) 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}" 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@64 -- # active_thread 3 0 00:21:33.801 20:16:31 -- scheduler/common.sh@472 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 3 0 00:21:33.801 20:16:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.801 20:16:31 -- common/autotest_common.sh@10 -- # set +x 00:21:33.801 20:16:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu") 00:21:33.801 20:16:31 -- scheduler/interrupt.sh@66 -- # collect_cpu_idle 00:21:33.801 20:16:31 -- scheduler/common.sh@619 -- # (( 1 > 0 )) 00:21:33.801 20:16:31 -- scheduler/common.sh@621 -- # local time=5 00:21:33.801 20:16:31 -- scheduler/common.sh@622 -- # local cpu 00:21:33.801 20:16:31 -- scheduler/common.sh@623 -- # local samples 00:21:33.801 20:16:31 -- scheduler/common.sh@624 -- # is_idle=() 00:21:33.801 20:16:31 -- scheduler/common.sh@624 -- # local -g is_idle 00:21:33.801 20:16:31 -- scheduler/common.sh@626 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5 00:21:33.801 Collecting cpu idle stats (cpus: 3) for 5 seconds... 
00:21:33.801 20:16:31 -- scheduler/common.sh@629 -- # get_cpu_time 5 idle 0 1 3 00:21:33.801 20:16:31 -- scheduler/common.sh@476 -- # xtrace_disable 00:21:33.801 20:16:31 -- common/autotest_common.sh@10 -- # set +x 00:21:40.364 20:16:37 -- scheduler/common.sh@631 -- # local user_load 00:21:40.364 20:16:37 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:40.364 20:16:37 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:40.364 20:16:37 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 3 '0 0 85 100 100' 57 00:21:40.364 * cpu3 idle samples: 0 0 85 100 100 (avg: 57%) 00:21:40.364 20:16:37 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 3 user 00:21:40.364 20:16:37 -- scheduler/common.sh@659 -- # local cpu=3 time=user 00:21:40.364 20:16:37 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:40.364 20:16:37 -- scheduler/common.sh@663 -- # [[ -v raw_samples_3 ]] 00:21:40.364 20:16:37 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_3 00:21:40.364 20:16:37 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:40.364 20:16:37 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:40.364 20:16:37 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:40.364 20:16:37 -- scheduler/common.sh@671 -- # case "$time" in 00:21:40.364 20:16:37 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:40.364 20:16:37 -- scheduler/common.sh@672 -- # trap - ERR 00:21:40.364 20:16:37 -- scheduler/common.sh@672 -- # print_backtrace 00:21:40.364 20:16:37 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:40.364 20:16:37 -- common/autotest_common.sh@1132 -- # return 0 00:21:40.364 20:16:37 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:40.364 20:16:37 -- scheduler/common.sh@678 -- # usage=0 00:21:40.364 20:16:37 -- scheduler/common.sh@679 -- # usage=0 00:21:40.364 20:16:37 -- scheduler/common.sh@681 -- # printf %u 0 00:21:40.364 20:16:37 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 3 user 0 00:21:40.364 * cpu3 user usage: 0 00:21:40.364 20:16:37 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 3 '148141 148243 148258 148258 148258' 00:21:40.364 * cpu3 user samples: 148141 148243 148258 148258 148258 00:21:40.364 20:16:37 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 3 '17 17 17 17 17' 00:21:40.364 * cpu3 nice samples: 17 17 17 17 17 00:21:40.364 20:16:37 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 3 '10060 10060 10060 10060 10060' 00:21:40.364 * cpu3 system samples: 10060 10060 10060 10060 10060 00:21:40.364 20:16:37 -- scheduler/common.sh@644 -- # user_load=0 00:21:40.364 20:16:37 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:40.364 20:16:37 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 3 00:21:40.364 * cpu3 is idle 00:21:40.364 20:16:37 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]' 00:21:40.364 20:16:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.364 20:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:40.364 20:16:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@67 -- # reactor_framework='{ 00:21:40.364 "lcore": 1, 
00:21:40.364 "busy": 3660417618, 00:21:40.364 "idle": 88558685066, 00:21:40.364 "in_interrupt": false, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [ 00:21:40.364 { 00:21:40.364 "name": "app_thread", 00:21:40.364 "id": 1, 00:21:40.364 "cpumask": "2", 00:21:40.364 "elapsed": 92238243520 00:21:40.364 }, 00:21:40.364 { 00:21:40.364 "name": "thread2", 00:21:40.364 "id": 2, 00:21:40.364 "cpumask": "4", 00:21:40.364 "elapsed": 25365092670 00:21:40.364 }, 00:21:40.364 { 00:21:40.364 "name": "thread3", 00:21:40.364 "id": 3, 00:21:40.364 "cpumask": "8", 00:21:40.364 "elapsed": 11562416342 00:21:40.364 } 00:21:40.364 ] 00:21:40.364 } 00:21:40.364 { 00:21:40.364 "lcore": 2, 00:21:40.364 "busy": 41408088284, 00:21:40.364 "idle": 9207437474, 00:21:40.364 "in_interrupt": true, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [] 00:21:40.364 } 00:21:40.364 { 00:21:40.364 "lcore": 3, 00:21:40.364 "busy": 42558182312, 00:21:40.364 "idle": 8286437518, 00:21:40.364 "in_interrupt": true, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [] 00:21:40.364 } 00:21:40.364 { 00:21:40.364 "lcore": 4, 00:21:40.364 "busy": 42787857852, 00:21:40.364 "idle": 3225290780, 00:21:40.364 "in_interrupt": false, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [ 00:21:40.364 { 00:21:40.364 "name": "thread4", 00:21:40.364 "id": 4, 00:21:40.364 "cpumask": "10", 00:21:40.364 "elapsed": 42618684342 00:21:40.364 } 00:21:40.364 ] 00:21:40.364 } 00:21:40.364 { 00:21:40.364 "lcore": 37, 00:21:40.364 "busy": 0, 00:21:40.364 "idle": 2305849582, 00:21:40.364 "in_interrupt": true, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [] 00:21:40.364 } 00:21:40.364 { 00:21:40.364 "lcore": 38, 00:21:40.364 "busy": 0, 00:21:40.364 "idle": 2306099682, 00:21:40.364 "in_interrupt": true, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [] 00:21:40.364 } 00:21:40.364 { 00:21:40.364 "lcore": 39, 00:21:40.364 "busy": 0, 00:21:40.364 "idle": 2306327486, 00:21:40.364 "in_interrupt": true, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [] 00:21:40.364 } 00:21:40.364 { 00:21:40.364 "lcore": 40, 00:21:40.364 "busy": 0, 00:21:40.364 "idle": 2306516116, 00:21:40.364 "in_interrupt": true, 00:21:40.364 "core_freq": 2300, 00:21:40.364 "lw_threads": [] 00:21:40.364 }' 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id' 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@68 -- # [[ -z '' ]] 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread3")' 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@69 -- # [[ -n { 00:21:40.364 "name": "thread3", 00:21:40.364 "id": 3, 00:21:40.364 "cpumask": "8", 00:21:40.364 "elapsed": 11562416342 00:21:40.364 } ]] 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 )) 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}" 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@64 -- # active_thread 4 0 00:21:40.364 20:16:37 -- scheduler/common.sh@472 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 4 0 00:21:40.364 20:16:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.364 20:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:40.364 20:16:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu") 00:21:40.364 20:16:37 -- scheduler/interrupt.sh@66 -- # collect_cpu_idle 00:21:40.364 
20:16:37 -- scheduler/common.sh@619 -- # (( 1 > 0 )) 00:21:40.364 20:16:37 -- scheduler/common.sh@621 -- # local time=5 00:21:40.364 20:16:37 -- scheduler/common.sh@622 -- # local cpu 00:21:40.364 20:16:37 -- scheduler/common.sh@623 -- # local samples 00:21:40.364 20:16:37 -- scheduler/common.sh@624 -- # is_idle=() 00:21:40.364 20:16:37 -- scheduler/common.sh@624 -- # local -g is_idle 00:21:40.364 20:16:37 -- scheduler/common.sh@626 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5 00:21:40.365 Collecting cpu idle stats (cpus: 4) for 5 seconds... 00:21:40.365 20:16:37 -- scheduler/common.sh@629 -- # get_cpu_time 5 idle 0 1 4 00:21:40.365 20:16:37 -- scheduler/common.sh@476 -- # xtrace_disable 00:21:40.365 20:16:37 -- common/autotest_common.sh@10 -- # set +x 00:21:46.927 20:16:43 -- scheduler/common.sh@631 -- # local user_load 00:21:46.927 20:16:43 -- scheduler/common.sh@632 -- # for cpu in "${cpus_to_collect[@]}" 00:21:46.927 20:16:43 -- scheduler/common.sh@633 -- # samples=(${cpu_times[cpu]}) 00:21:46.927 20:16:43 -- scheduler/common.sh@634 -- # printf '* cpu%u idle samples: %s (avg: %u%%)\n' 4 '0 0 35 100 100' 47 00:21:46.927 * cpu4 idle samples: 0 0 35 100 100 (avg: 47%) 00:21:46.927 20:16:43 -- scheduler/common.sh@644 -- # cpu_usage_clk_tck 4 user 00:21:46.927 20:16:43 -- scheduler/common.sh@659 -- # local cpu=4 time=user 00:21:46.927 20:16:43 -- scheduler/common.sh@660 -- # local user nice system usage clk_delta 00:21:46.927 20:16:43 -- scheduler/common.sh@663 -- # [[ -v raw_samples_4 ]] 00:21:46.927 20:16:43 -- scheduler/common.sh@665 -- # local -n raw_samples=raw_samples_4 00:21:46.927 20:16:43 -- scheduler/common.sh@666 -- # user=("${!raw_samples[cpu_time_map["user"]]}") 00:21:46.927 20:16:43 -- scheduler/common.sh@667 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}") 00:21:46.927 20:16:43 -- scheduler/common.sh@668 -- # system=("${!raw_samples[cpu_time_map["system"]]}") 00:21:46.927 20:16:43 -- scheduler/common.sh@671 -- # case "$time" in 00:21:46.927 20:16:43 -- scheduler/common.sh@672 -- # (( clk_delta += (user[-1] - user[-2]) )) 00:21:46.927 20:16:43 -- scheduler/common.sh@672 -- # trap - ERR 00:21:46.927 20:16:43 -- scheduler/common.sh@672 -- # print_backtrace 00:21:46.927 20:16:43 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:21:46.927 20:16:43 -- common/autotest_common.sh@1132 -- # return 0 00:21:46.927 20:16:43 -- scheduler/common.sh@678 -- # getconf CLK_TCK 00:21:46.927 20:16:43 -- scheduler/common.sh@678 -- # usage=0 00:21:46.927 20:16:43 -- scheduler/common.sh@679 -- # usage=0 00:21:46.927 20:16:43 -- scheduler/common.sh@681 -- # printf %u 0 00:21:46.927 20:16:43 -- scheduler/common.sh@682 -- # printf '* cpu%u %s usage: %u\n' 4 user 0 00:21:46.927 * cpu4 user usage: 0 00:21:46.927 20:16:43 -- scheduler/common.sh@683 -- # printf '* cpu%u user samples: %s\n' 4 '48003 48104 48169 48169 48169' 00:21:46.927 * cpu4 user samples: 48003 48104 48169 48169 48169 00:21:46.927 20:16:43 -- scheduler/common.sh@684 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0' 00:21:46.927 * cpu4 nice samples: 0 0 0 0 0 00:21:46.927 20:16:43 -- scheduler/common.sh@685 -- # printf '* cpu%u system samples: %s\n' 4 '10277 10277 10277 10277 10277' 00:21:46.927 * cpu4 system samples: 10277 10277 10277 10277 10277 00:21:46.927 20:16:43 -- scheduler/common.sh@644 -- # user_load=0 00:21:46.927 20:16:43 -- scheduler/common.sh@645 -- # (( samples[-1] >= 70 )) 00:21:46.927 20:16:43 -- scheduler/common.sh@646 -- # printf '* cpu%u is idle\n' 4 00:21:46.927 * cpu4 is 
idle 00:21:46.927 20:16:43 -- scheduler/common.sh@647 -- # is_idle[cpu]=1 00:21:46.927 20:16:43 -- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors 00:21:46.927 20:16:43 -- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]' 00:21:46.928 20:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.928 20:16:43 -- common/autotest_common.sh@10 -- # set +x 00:21:46.928 20:16:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@67 -- # reactor_framework='{ 00:21:46.928 "lcore": 1, 00:21:46.928 "busy": 3684253912, 00:21:46.928 "idle": 103259782604, 00:21:46.928 "in_interrupt": false, 00:21:46.928 "core_freq": 1900, 00:21:46.928 "lw_threads": [ 00:21:46.928 { 00:21:46.928 "name": "app_thread", 00:21:46.928 "id": 1, 00:21:46.928 "cpumask": "2", 00:21:46.928 "elapsed": 106963130186 00:21:46.928 }, 00:21:46.928 { 00:21:46.928 "name": "thread2", 00:21:46.928 "id": 2, 00:21:46.928 "cpumask": "4", 00:21:46.928 "elapsed": 40089979336 00:21:46.928 }, 00:21:46.928 { 00:21:46.928 "name": "thread3", 00:21:46.928 "id": 3, 00:21:46.928 "cpumask": "8", 00:21:46.928 "elapsed": 26287303008 00:21:46.928 }, 00:21:46.928 { 00:21:46.928 "name": "thread4", 00:21:46.928 "id": 4, 00:21:46.928 "cpumask": "10", 00:21:46.928 "elapsed": 10198634292 00:21:46.928 } 00:21:46.928 ] 00:21:46.928 } 00:21:46.928 { 00:21:46.928 "lcore": 2, 00:21:46.928 "busy": 41408088284, 00:21:46.928 "idle": 9207437474, 00:21:46.928 "in_interrupt": true, 00:21:46.928 "core_freq": 2300, 00:21:46.928 "lw_threads": [] 00:21:46.928 } 00:21:46.928 { 00:21:46.928 "lcore": 3, 00:21:46.928 "busy": 42558182312, 00:21:46.928 "idle": 8286437518, 00:21:46.928 "in_interrupt": true, 00:21:46.928 "core_freq": 2300, 00:21:46.928 "lw_threads": [] 00:21:46.928 } 00:21:46.928 { 00:21:46.928 "lcore": 4, 00:21:46.928 "busy": 43248223518, 00:21:46.928 "idle": 9422512758, 00:21:46.928 "in_interrupt": true, 00:21:46.928 "core_freq": 2300, 00:21:46.928 "lw_threads": [] 00:21:46.928 } 00:21:46.928 { 00:21:46.928 "lcore": 37, 00:21:46.928 "busy": 0, 00:21:46.928 "idle": 2305849582, 00:21:46.928 "in_interrupt": true, 00:21:46.928 "core_freq": 2300, 00:21:46.928 "lw_threads": [] 00:21:46.928 } 00:21:46.928 { 00:21:46.928 "lcore": 38, 00:21:46.928 "busy": 0, 00:21:46.928 "idle": 2306099682, 00:21:46.928 "in_interrupt": true, 00:21:46.928 "core_freq": 2300, 00:21:46.928 "lw_threads": [] 00:21:46.928 } 00:21:46.928 { 00:21:46.928 "lcore": 39, 00:21:46.928 "busy": 0, 00:21:46.928 "idle": 2306327486, 00:21:46.928 "in_interrupt": true, 00:21:46.928 "core_freq": 2300, 00:21:46.928 "lw_threads": [] 00:21:46.928 } 00:21:46.928 { 00:21:46.928 "lcore": 40, 00:21:46.928 "busy": 0, 00:21:46.928 "idle": 2306516116, 00:21:46.928 "in_interrupt": true, 00:21:46.928 "core_freq": 2300, 00:21:46.928 "lw_threads": [] 00:21:46.928 }' 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id' 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@68 -- # [[ -z '' ]] 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread4")' 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@69 -- # [[ -n { 00:21:46.928 "name": "thread4", 00:21:46.928 "id": 4, 00:21:46.928 "cpumask": "10", 00:21:46.928 "elapsed": 10198634292 00:21:46.928 } ]] 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 )) 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}" 00:21:46.928 20:16:43 -- 
scheduler/interrupt.sh@74 -- # destroy_thread 2 00:21:46.928 20:16:43 -- scheduler/common.sh@468 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 2 00:21:46.928 20:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.928 20:16:43 -- common/autotest_common.sh@10 -- # set +x 00:21:46.928 20:16:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}" 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@74 -- # destroy_thread 3 00:21:46.928 20:16:43 -- scheduler/common.sh@468 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 3 00:21:46.928 20:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.928 20:16:43 -- common/autotest_common.sh@10 -- # set +x 00:21:46.928 20:16:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}" 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@74 -- # destroy_thread 4 00:21:46.928 20:16:43 -- scheduler/common.sh@468 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 4 00:21:46.928 20:16:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:46.928 20:16:43 -- common/autotest_common.sh@10 -- # set +x 00:21:46.928 20:16:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:46.928 20:16:43 -- scheduler/interrupt.sh@1 -- # killprocess 2185987 00:21:46.928 20:16:43 -- common/autotest_common.sh@926 -- # '[' -z 2185987 ']' 00:21:46.928 20:16:43 -- common/autotest_common.sh@930 -- # kill -0 2185987 00:21:46.928 20:16:43 -- common/autotest_common.sh@931 -- # uname 00:21:46.928 20:16:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:46.928 20:16:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2185987 00:21:46.928 20:16:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:46.928 20:16:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:46.928 20:16:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2185987' 00:21:46.928 killing process with pid 2185987 00:21:46.928 20:16:43 -- common/autotest_common.sh@945 -- # kill 2185987 00:21:46.928 [2024-04-25 20:16:43.952096] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
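Condensed, the check-and-teardown sequence the test just ran for each worker thread looks like this — shown as direct rpc.py invocations for illustration; the test itself goes through its rpc_cmd wrapper and the same jq filter over framework_get_reactors seen in the trace above.

    # 1) After deactivating threadN, confirm it was pulled back onto the app core:
    ./scripts/rpc.py framework_get_reactors \
        | jq -r '.reactors[] | select(.lcore == 1) | .lw_threads[] | select(.name == "thread2")'
    # Non-empty output means thread2 now runs on lcore 1 next to app_thread,
    # while lcore 2 reports in_interrupt == true.

    # 2) Once every busy CPU has gone idle, drop the helper threads:
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 2 0
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 2    # likewise for 3 and 4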
00:21:46.928 20:16:43 -- common/autotest_common.sh@950 -- # wait 2185987 00:21:46.928 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:21:46.928 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:21:46.928 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:21:46.929 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:21:46.929 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:21:46.929 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:21:46.929 POWER: Power management governor of lcore 4 has been set to 'powersave' successfully 00:21:46.929 POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original 00:21:46.929 POWER: Power management governor of lcore 37 has been set to 'powersave' successfully 00:21:46.929 POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original 00:21:46.929 POWER: Power management governor of lcore 38 has been set to 'powersave' successfully 00:21:46.929 POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original 00:21:46.929 POWER: Power management governor of lcore 39 has been set to 'powersave' successfully 00:21:46.929 POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original 00:21:46.929 POWER: Power management governor of lcore 40 has been set to 'powersave' successfully 00:21:46.929 POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original 00:21:46.929 00:21:46.929 real 0m47.611s 00:21:46.929 user 2m0.422s 00:21:46.929 sys 0m1.490s 00:21:46.929 20:16:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.929 20:16:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.929 ************************************ 00:21:46.929 END TEST interrupt_mode 00:21:46.929 ************************************ 00:21:46.929 20:16:44 -- scheduler/scheduler.sh@1 -- # restore_cgroups 00:21:46.929 20:16:44 -- scheduler/isolate_cores.sh@12 -- # xtrace_disable 00:21:46.929 20:16:44 -- common/autotest_common.sh@10 -- # set +x 00:21:46.929 Moved 0 processes, failed 0 00:21:47.496 Moved 98 processes, failed 4 00:21:47.496 rmdir: failed to remove '/sys/fs/cgroup//cpuset/all': Device or resource busy 00:21:48.064 Moved 97 processes, failed 4 00:21:48.064 rmdir: failed to remove '/sys/fs/cgroup//cpuset': Device or resource busy 00:21:48.064 00:21:48.064 real 1m38.491s 00:21:48.064 user 3m18.100s 00:21:48.064 sys 0m19.792s 00:21:48.064 20:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.064 20:16:45 -- common/autotest_common.sh@10 -- # set +x 00:21:48.064 ************************************ 00:21:48.064 END TEST scheduler 00:21:48.064 ************************************ 00:21:48.064 20:16:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:48.064 20:16:45 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:21:48.064 20:16:45 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:21:48.064 20:16:45 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:21:48.064 20:16:45 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:21:48.064 20:16:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:48.064 20:16:45 -- common/autotest_common.sh@10 -- # set +x 
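The "Moved N processes, failed M" and rmdir messages above come from the restore_cgroups step in test/scheduler/isolate_cores.sh. The sketch below is an assumption about what that restore path does (only the cgroup paths are taken from the log): migrate every task out of the isolated cpuset back to the parent cgroup, then remove the now-empty directories — rmdir keeps returning "Device or resource busy" as long as any task, typically an unmovable kernel thread, is still attached.

    moved=0 failed=0
    for task in $(< /sys/fs/cgroup/cpuset/all/tasks); do
        if echo "$task" > /sys/fs/cgroup/cpuset/tasks 2>/dev/null; then
            (( ++moved ))
        else
            (( ++failed ))                       # e.g. kernel threads that cannot be migrated
        fi
    done
    echo "Moved $moved processes, failed $failed"
    rmdir /sys/fs/cgroup/cpuset/all              # EBUSY while any task is still attached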
00:21:48.064 20:16:45 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:21:48.064 20:16:45 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:21:48.064 20:16:45 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:21:48.064 20:16:45 -- common/autotest_common.sh@10 -- # set +x 00:21:53.339 INFO: APP EXITING 00:21:53.339 INFO: killing all VMs 00:21:53.339 INFO: killing vhost app 00:21:53.339 INFO: EXIT DONE 00:21:54.711 Waiting for block devices as requested 00:21:54.711 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:21:54.711 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:54.711 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:54.970 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:54.970 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:54.970 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:55.228 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:55.228 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:55.228 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:55.487 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:21:55.487 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:21:55.487 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:21:55.746 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:21:55.746 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:21:55.746 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:21:56.004 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:21:56.004 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:21:58.535 Cleaning 00:21:58.535 Removing: /var/run/dpdk/spdk0/config 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:21:58.535 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:58.535 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:58.535 Removing: /dev/shm/bdevperf_trace.pid2165858 00:21:58.535 Removing: /dev/shm/spdk_tgt_trace.pid2042856 00:21:58.535 Removing: /var/run/dpdk/spdk0 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2040323 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2041500 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2042856 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2043485 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2043833 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2044125 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2044417 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2044786 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2044983 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2045182 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2045407 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2046170 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2048891 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2049130 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2049513 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2049691 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2050341 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2050452 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2051280 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2051390 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2051771 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2051936 00:21:58.535 Removing: 
/var/run/dpdk/spdk_pid2052165 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2052182 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2052807 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2053003 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2053245 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2053500 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2053647 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2053714 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2053921 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2054189 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2054420 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2054634 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2054823 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2055017 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2055203 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2055399 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2055586 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2055797 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2056039 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2056314 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2056516 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2056711 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2056898 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2057097 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2057275 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2057480 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2057658 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2057906 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2058137 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2058404 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2058589 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2058791 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2058969 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2059173 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2059351 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2059555 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2059733 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2059994 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2060225 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2060476 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2060665 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2060864 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2061051 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2061256 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2061440 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2061638 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2061883 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2062182 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2062267 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2062676 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2063157 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2064776 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2065779 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2068523 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2070159 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2071928 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2073053 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2073078 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2073309 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2077452 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2078404 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2081410 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2083039 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2084823 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2085941 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2085966 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2086117 00:21:58.535 Removing: 
/var/run/dpdk/spdk_pid2099782 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2101202 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2102092 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2103005 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2106024 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2111474 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2115506 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2122594 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2127865 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2134526 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2135656 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2143147 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2157147 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2157388 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2160689 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2163474 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2164208 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2165097 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2165858 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2166274 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2167386 00:21:58.535 Removing: /var/run/dpdk/spdk_pid2168528 00:21:58.536 Removing: /var/run/dpdk/spdk_pid2169100 00:21:58.536 Removing: /var/run/dpdk/spdk_pid2169865 00:21:58.536 Removing: /var/run/dpdk/spdk_pid2170230 00:21:58.536 Removing: /var/run/dpdk/spdk_pid2170461 00:21:58.536 Removing: /var/run/dpdk/spdk_pid2178512 00:21:58.536 Removing: /var/run/dpdk/spdk_pid2182637 00:21:58.536 Removing: /var/run/dpdk/spdk_pid2185987 00:21:58.536 Clean 00:21:58.536 killing process with pid 1997804 00:22:05.135 killing process with pid 1997801 00:22:05.135 killing process with pid 1997803 00:22:05.135 killing process with pid 1997802 00:22:05.136 20:17:02 -- common/autotest_common.sh@1436 -- # return 0 00:22:05.136 20:17:02 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:22:05.136 20:17:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:05.136 20:17:02 -- common/autotest_common.sh@10 -- # set +x 00:22:05.136 20:17:02 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:22:05.136 20:17:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:05.136 20:17:02 -- common/autotest_common.sh@10 -- # set +x 00:22:05.136 20:17:02 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt 00:22:05.136 20:17:02 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log ]] 00:22:05.136 20:17:02 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log 00:22:05.136 20:17:02 -- spdk/autotest.sh@394 -- # hash lcov 00:22:05.136 20:17:02 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:22:05.136 20:17:02 -- spdk/autotest.sh@396 -- # hostname 00:22:05.136 20:17:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvme-phy-autotest/spdk -t spdk-wfp-45 -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info 00:22:05.136 geninfo: WARNING: invalid characters removed from testname! 
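Once cov_total.info has been assembled and pruned by the lcov invocations that follow, it can be rendered into an HTML report with genhtml. That is not a step this pipeline runs; the command below is only the usual companion step, using the same rc switches the job passes to lcov, with the coverage output directory as a placeholder.

    genhtml --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
            --rc genhtml_legend=1 \
            -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coverage \
            /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info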
00:22:31.664 20:17:29 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:22:34.956 20:17:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:22:37.489 20:17:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:22:40.776 20:17:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:22:42.679 20:17:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:22:45.210 20:17:43 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info 00:22:47.788 20:17:45 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:47.788 20:17:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh 00:22:47.788 20:17:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:47.788 20:17:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.788 20:17:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.788 20:17:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.788 20:17:45 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.788 20:17:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.788 20:17:45 -- paths/export.sh@5 -- $ export PATH 00:22:47.788 20:17:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.788 20:17:45 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output 00:22:47.788 20:17:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:22:47.788 20:17:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714069065.XXXXXX 00:22:47.788 20:17:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714069065.ez9V9S 00:22:47.788 20:17:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:22:47.788 20:17:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:22:47.788 20:17:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/' 00:22:47.788 20:17:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp' 00:22:47.788 20:17:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:22:48.066 20:17:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:22:48.066 20:17:45 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:22:48.066 20:17:45 -- common/autotest_common.sh@10 -- $ set +x 00:22:48.066 20:17:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk' 00:22:48.066 20:17:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72 00:22:48.066 20:17:45 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk 00:22:48.066 20:17:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:48.066 20:17:45 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:48.066 20:17:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:48.066 20:17:45 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:48.066 20:17:45 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:48.066 20:17:45 -- common/autotest_common.sh@725 -- $ '[' -x 
/usr/local/FlameGraph/flamegraph.pl ']' 00:22:48.066 20:17:45 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt 00:22:48.066 20:17:45 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:48.066 + [[ -n 1944437 ]] 00:22:48.066 + sudo kill 1944437 00:22:48.079 [Pipeline] } 00:22:48.100 [Pipeline] // stage 00:22:48.105 [Pipeline] } 00:22:48.125 [Pipeline] // timeout 00:22:48.130 [Pipeline] } 00:22:48.150 [Pipeline] // catchError 00:22:48.155 [Pipeline] } 00:22:48.173 [Pipeline] // wrap 00:22:48.178 [Pipeline] } 00:22:48.196 [Pipeline] // catchError 00:22:48.206 [Pipeline] stage 00:22:48.208 [Pipeline] { (Epilogue) 00:22:48.224 [Pipeline] catchError 00:22:48.226 [Pipeline] { 00:22:48.243 [Pipeline] echo 00:22:48.245 Cleanup processes 00:22:48.252 [Pipeline] sh 00:22:48.540 + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:22:48.540 2201080 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:22:48.554 [Pipeline] sh 00:22:48.841 ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk 00:22:48.841 ++ grep -v 'sudo pgrep' 00:22:48.841 ++ awk '{print $1}' 00:22:48.841 + sudo kill -9 00:22:48.841 + true 00:22:48.853 [Pipeline] sh 00:22:49.136 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:01.354 [Pipeline] sh 00:23:01.639 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:01.639 Artifacts sizes are good 00:23:01.655 [Pipeline] archiveArtifacts 00:23:01.663 Archiving artifacts 00:23:01.854 [Pipeline] sh 00:23:02.139 + sudo chown -R sys_sgci /var/jenkins/workspace/nvme-phy-autotest 00:23:02.415 [Pipeline] cleanWs 00:23:02.425 [WS-CLEANUP] Deleting project workspace... 00:23:02.425 [WS-CLEANUP] Deferred wipeout is used... 00:23:02.432 [WS-CLEANUP] done 00:23:02.434 [Pipeline] } 00:23:02.457 [Pipeline] // catchError 00:23:02.475 [Pipeline] sh 00:23:02.760 + logger -p user.info -t JENKINS-CI 00:23:02.769 [Pipeline] } 00:23:02.785 [Pipeline] // stage 00:23:02.791 [Pipeline] } 00:23:02.809 [Pipeline] // node 00:23:02.815 [Pipeline] End of Pipeline 00:23:02.853 Finished: SUCCESS