00:00:00.001  Started by upstream project "autotest-nightly-lts" build number 2466
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3727
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.106  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.106  The recommended git tool is: git
00:00:00.106  using credential 00000000-0000-0000-0000-000000000002
00:00:00.108   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.150  Fetching changes from the remote Git repository
00:00:00.151   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.193  Using shallow fetch with depth 1
00:00:00.193  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.193   > git --version # timeout=10
00:00:00.228   > git --version # 'git version 2.39.2'
00:00:00.228  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.254  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.254   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.054   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.065   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.077  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.077   > git config core.sparsecheckout # timeout=10
00:00:07.087   > git read-tree -mu HEAD # timeout=10
00:00:07.103   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.120  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.120   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
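The checkout above is Jenkins' standard shallow-clone sequence for the jbp config repo. A minimal by-hand equivalent, with the URL taken from the log (credential and proxy handling omitted):

    git init jbp && cd jbp
    # --depth=1 keeps the fetch small; FETCH_HEAD lands on the tip of master
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f FETCH_HEAD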
00:00:07.209  [Pipeline] Start of Pipeline
00:00:07.224  [Pipeline] library
00:00:07.225  Loading library shm_lib@master
00:00:07.225  Library shm_lib@master is cached. Copying from home.
00:00:07.243  [Pipeline] node
00:00:07.325  Running on WFP45 in /var/jenkins/workspace/nvme-phy-autotest
00:00:07.327  [Pipeline] {
00:00:07.337  [Pipeline] catchError
00:00:07.338  [Pipeline] {
00:00:07.348  [Pipeline] wrap
00:00:07.354  [Pipeline] {
00:00:07.360  [Pipeline] stage
00:00:07.361  [Pipeline] { (Prologue)
00:00:07.558  [Pipeline] sh
00:00:07.837  + logger -p user.info -t JENKINS-CI
00:00:07.849  [Pipeline] echo
00:00:07.850  Node: WFP45
00:00:07.855  [Pipeline] sh
00:00:08.158  [Pipeline] setCustomBuildProperty
00:00:08.172  [Pipeline] echo
00:00:08.174  Cleanup processes
00:00:08.180  [Pipeline] sh
00:00:08.467  + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:08.467  1979460 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:08.484  [Pipeline] sh
00:00:08.774  ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:08.774  ++ grep -v 'sudo pgrep'
00:00:08.774  ++ awk '{print $1}'
00:00:08.774  + sudo kill -9
00:00:08.774  + true
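A condensed reconstruction of the cleanup traced above: the bare "+ sudo kill -9" shows the command substitution expanded to nothing, and the trailing "+ true" (most likely an "|| true" guard) keeps the step from failing when no stale SPDK processes exist:

    sudo kill -9 $(sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk \
        | grep -v 'sudo pgrep' | awk '{print $1}') || true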
00:00:08.788  [Pipeline] cleanWs
00:00:08.797  [WS-CLEANUP] Deleting project workspace...
00:00:08.797  [WS-CLEANUP] Deferred wipeout is used...
00:00:08.803  [WS-CLEANUP] done
00:00:08.808  [Pipeline] setCustomBuildProperty
00:00:08.824  [Pipeline] sh
00:00:09.111  + sudo git config --global --replace-all safe.directory '*'
00:00:09.199  [Pipeline] httpRequest
00:00:09.936  [Pipeline] echo
00:00:09.938  Sorcerer 10.211.164.20 is alive
00:00:09.948  [Pipeline] retry
00:00:09.950  [Pipeline] {
00:00:09.964  [Pipeline] httpRequest
00:00:09.968  HttpMethod: GET
00:00:09.969  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.969  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.980  Response Code: HTTP/1.1 200 OK
00:00:09.980  Success: Status code 200 is in the accepted range: 200,404
00:00:09.981  Saving response body to /var/jenkins/workspace/nvme-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.764  [Pipeline] }
00:00:12.779  [Pipeline] // retry
00:00:12.786  [Pipeline] sh
00:00:13.068  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
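The tarball fetch above is a package-cache fast path: 404 is in the accepted status range, so a cache miss would not fail the request and the pipeline can fall back to fetching the repo another way. A rough by-hand equivalent (the curl flags are illustrative, not taken from this pipeline):

    curl -fO http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz &&
        tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz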
00:00:13.345  [Pipeline] httpRequest
00:00:13.959  [Pipeline] echo
00:00:13.961  Sorcerer 10.211.164.20 is alive
00:00:13.970  [Pipeline] retry
00:00:13.972  [Pipeline] {
00:00:13.985  [Pipeline] httpRequest
00:00:13.990  HttpMethod: GET
00:00:13.990  URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:13.991  Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:13.994  Response Code: HTTP/1.1 200 OK
00:00:13.994  Success: Status code 200 is in the accepted range: 200,404
00:00:13.995  Saving response body to /var/jenkins/workspace/nvme-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:31.891  [Pipeline] }
00:00:31.909  [Pipeline] // retry
00:00:31.917  [Pipeline] sh
00:00:32.201  + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:36.404  [Pipeline] sh
00:00:36.689  + git -C spdk log --oneline -n5
00:00:36.689  c13c99a5e test: Various fixes for Fedora40
00:00:36.689  726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:00:36.689  61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:00:36.689  7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:00:36.689  ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:00:36.699  [Pipeline] }
00:00:36.712  [Pipeline] // stage
00:00:36.721  [Pipeline] stage
00:00:36.723  [Pipeline] { (Prepare)
00:00:36.738  [Pipeline] writeFile
00:00:36.753  [Pipeline] sh
00:00:37.037  + logger -p user.info -t JENKINS-CI
00:00:37.050  [Pipeline] sh
00:00:37.335  + logger -p user.info -t JENKINS-CI
00:00:37.347  [Pipeline] sh
00:00:37.631  + cat autorun-spdk.conf
00:00:37.631  SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.631  SPDK_TEST_IOAT=1
00:00:37.631  SPDK_TEST_NVME=1
00:00:37.631  SPDK_TEST_NVME_CLI=1
00:00:37.631  SPDK_TEST_OCF=1
00:00:37.631  SPDK_RUN_UBSAN=1
00:00:37.631  SPDK_TEST_NVME_CUSE=1
00:00:37.631  SPDK_TEST_SCHEDULER=1
00:00:37.638  RUN_NIGHTLY=1
00:00:37.642  [Pipeline] readFile
00:00:37.665  [Pipeline] withEnv
00:00:37.667  [Pipeline] {
00:00:37.679  [Pipeline] sh
00:00:37.966  + set -ex
00:00:37.966  + [[ -f /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf ]]
00:00:37.966  + source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf
00:00:37.966  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:37.966  ++ SPDK_TEST_IOAT=1
00:00:37.966  ++ SPDK_TEST_NVME=1
00:00:37.966  ++ SPDK_TEST_NVME_CLI=1
00:00:37.966  ++ SPDK_TEST_OCF=1
00:00:37.966  ++ SPDK_RUN_UBSAN=1
00:00:37.966  ++ SPDK_TEST_NVME_CUSE=1
00:00:37.966  ++ SPDK_TEST_SCHEDULER=1
00:00:37.966  ++ RUN_NIGHTLY=1
00:00:37.966  + case $SPDK_TEST_NVMF_NICS in
00:00:37.966  + DRIVERS=
00:00:37.966  + [[ -n '' ]]
00:00:37.966  + exit 0
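A sketch of the gate traced above: autorun-spdk.conf is a plain KEY=1 shell fragment and is sourced as-is; with SPDK_TEST_NVMF_NICS unset, DRIVERS stays empty and the step exits 0 before touching any kernel modules. The mlx5 arm below is an illustrative assumption, not something this run exercised:

    source /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
        mlx5_ib) DRIVERS=mlx5_ib ;;   # assumed example arm, not exercised here
        *)       DRIVERS= ;;
    esac
    [[ -n $DRIVERS ]] || exit 0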
00:00:37.976  [Pipeline] }
00:00:37.989  [Pipeline] // withEnv
00:00:37.994  [Pipeline] }
00:00:38.008  [Pipeline] // stage
00:00:38.016  [Pipeline] catchError
00:00:38.018  [Pipeline] {
00:00:38.030  [Pipeline] timeout
00:00:38.030  Timeout set to expire in 40 min
00:00:38.032  [Pipeline] {
00:00:38.045  [Pipeline] stage
00:00:38.047  [Pipeline] { (Tests)
00:00:38.060  [Pipeline] sh
00:00:38.347  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvme-phy-autotest
00:00:38.347  ++ readlink -f /var/jenkins/workspace/nvme-phy-autotest
00:00:38.347  + DIR_ROOT=/var/jenkins/workspace/nvme-phy-autotest
00:00:38.347  + [[ -n /var/jenkins/workspace/nvme-phy-autotest ]]
00:00:38.347  + DIR_SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:38.347  + DIR_OUTPUT=/var/jenkins/workspace/nvme-phy-autotest/output
00:00:38.347  + [[ -d /var/jenkins/workspace/nvme-phy-autotest/spdk ]]
00:00:38.347  + [[ ! -d /var/jenkins/workspace/nvme-phy-autotest/output ]]
00:00:38.347  + mkdir -p /var/jenkins/workspace/nvme-phy-autotest/output
00:00:38.347  + [[ -d /var/jenkins/workspace/nvme-phy-autotest/output ]]
00:00:38.347  + [[ nvme-phy-autotest == pkgdep-* ]]
00:00:38.347  + cd /var/jenkins/workspace/nvme-phy-autotest
00:00:38.347  + source /etc/os-release
00:00:38.347  ++ NAME='Fedora Linux'
00:00:38.347  ++ VERSION='39 (Cloud Edition)'
00:00:38.347  ++ ID=fedora
00:00:38.347  ++ VERSION_ID=39
00:00:38.347  ++ VERSION_CODENAME=
00:00:38.347  ++ PLATFORM_ID=platform:f39
00:00:38.347  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:38.347  ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:38.347  ++ LOGO=fedora-logo-icon
00:00:38.347  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:38.347  ++ HOME_URL=https://fedoraproject.org/
00:00:38.347  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:38.347  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:38.347  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:38.347  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:38.347  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:38.347  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:38.347  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:38.347  ++ SUPPORT_END=2024-11-12
00:00:38.347  ++ VARIANT='Cloud Edition'
00:00:38.347  ++ VARIANT_ID=cloud
00:00:38.347  + uname -a
00:00:38.347  Linux spdk-wfp-45 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
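The autoruner.sh prologue traced above reduces to workspace scaffolding plus distro detection, roughly:

    DIR_ROOT=$(readlink -f /var/jenkins/workspace/nvme-phy-autotest)
    DIR_SPDK=$DIR_ROOT/spdk
    DIR_OUTPUT=$DIR_ROOT/output
    [[ -d $DIR_OUTPUT ]] || mkdir -p "$DIR_OUTPUT"
    cd "$DIR_ROOT"
    source /etc/os-release    # NAME/VERSION_ID feed per-distro branches later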
00:00:38.347  + sudo /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status
00:00:40.885  Hugepages
00:00:40.885  node     hugesize     free /  total
00:00:40.885  node0   1048576kB        0 /      0
00:00:40.885  node0      2048kB        0 /      0
00:00:40.885  node1   1048576kB        0 /      0
00:00:40.885  node1      2048kB        0 /      0
00:00:40.885  
00:00:40.885  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:00:40.885  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:00:40.885  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:00:40.885  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:00:40.885  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:00:40.885  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:00:40.885  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:00:40.885  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:00:40.885  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:00:41.145  NVMe                      0000:5e:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:00:41.145  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:00:41.145  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:00:41.145  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:00:41.145  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:00:41.145  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:00:41.145  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:00:41.145  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:00:41.145  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
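The status table maps PCI functions to kernel drivers: one NVMe drive (nvme0n1 behind 0000:5e:00.0) plus sixteen I/OAT DMA channels across both NUMA nodes, and no hugepages reserved yet. Any row can be cross-checked with plain sysfs (generic Linux, nothing SPDK-specific):

    basename "$(readlink /sys/bus/pci/devices/0000:5e:00.0/driver)"   # nvme
    cat /sys/bus/pci/devices/0000:5e:00.0/vendor                      # 0x8086
    ls /sys/bus/pci/devices/0000:5e:00.0/nvme                         # nvme0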
00:00:41.145  + rm -f /tmp/spdk-ld-path
00:00:41.145  + source autorun-spdk.conf
00:00:41.145  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.145  ++ SPDK_TEST_IOAT=1
00:00:41.145  ++ SPDK_TEST_NVME=1
00:00:41.145  ++ SPDK_TEST_NVME_CLI=1
00:00:41.145  ++ SPDK_TEST_OCF=1
00:00:41.145  ++ SPDK_RUN_UBSAN=1
00:00:41.145  ++ SPDK_TEST_NVME_CUSE=1
00:00:41.145  ++ SPDK_TEST_SCHEDULER=1
00:00:41.145  ++ RUN_NIGHTLY=1
00:00:41.145  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:00:41.145  + [[ -n '' ]]
00:00:41.145  + sudo git config --global --add safe.directory /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:41.145  + for M in /var/spdk/build-*-manifest.txt
00:00:41.145  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:41.145  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/
00:00:41.145  + for M in /var/spdk/build-*-manifest.txt
00:00:41.145  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:41.145  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/
00:00:41.145  + for M in /var/spdk/build-*-manifest.txt
00:00:41.145  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:41.145  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvme-phy-autotest/output/
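The manifest copies traced above are one guarded glob loop, roughly:

    for M in /var/spdk/build-*-manifest.txt; do
        [[ -f $M ]] && cp "$M" /var/jenkins/workspace/nvme-phy-autotest/output/
    done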
00:00:41.145  ++ uname
00:00:41.145  + [[ Linux == \L\i\n\u\x ]]
00:00:41.145  + sudo dmesg -T
00:00:41.145  + sudo dmesg --clear
00:00:41.145  + dmesg_pid=1980308
00:00:41.145  + [[ Fedora Linux == FreeBSD ]]
00:00:41.145  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:41.145  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:41.145  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:41.145  + [[ -x /usr/src/fio-static/fio ]]
00:00:41.145  + export FIO_BIN=/usr/src/fio-static/fio
00:00:41.145  + FIO_BIN=/usr/src/fio-static/fio
00:00:41.145  + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\e\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:41.145  + sudo dmesg -Tw
00:00:41.145  + [[ ! -v VFIO_QEMU_BIN ]]
00:00:41.145  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:41.145  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:41.145  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:41.145  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:41.145  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:41.145  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:41.145  + spdk/autorun.sh /var/jenkins/workspace/nvme-phy-autotest/autorun-spdk.conf
00:00:41.145  Test configuration:
00:00:41.145  SPDK_RUN_FUNCTIONAL_TEST=1
00:00:41.145  SPDK_TEST_IOAT=1
00:00:41.145  SPDK_TEST_NVME=1
00:00:41.145  SPDK_TEST_NVME_CLI=1
00:00:41.145  SPDK_TEST_OCF=1
00:00:41.145  SPDK_RUN_UBSAN=1
00:00:41.145  SPDK_TEST_NVME_CUSE=1
00:00:41.145  SPDK_TEST_SCHEDULER=1
00:00:41.405  RUN_NIGHTLY=1
00:00:41.405   10:38:30	-- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:00:41.405    10:38:30	-- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:00:41.405     10:38:30	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:41.406     10:38:30	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:41.406     10:38:30	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:41.406      10:38:30	-- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:41.406      10:38:30	-- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:41.406      10:38:30	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:41.406      10:38:30	-- paths/export.sh@5 -- $ export PATH
00:00:41.406      10:38:30	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:41.406    10:38:30	-- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output
00:00:41.406      10:38:30	-- common/autobuild_common.sh@440 -- $ date +%s
00:00:41.406     10:38:30	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734255510.XXXXXX
00:00:41.406    10:38:30	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734255510.xX9JMP
00:00:41.406    10:38:30	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:00:41.406    10:38:30	-- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:00:41.406    10:38:30	-- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/'
00:00:41.406    10:38:30	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:41.406    10:38:30	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:41.406     10:38:30	-- common/autobuild_common.sh@456 -- $ get_config_params
00:00:41.406     10:38:30	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:00:41.406     10:38:30	-- common/autotest_common.sh@10 -- $ set +x
00:00:41.406    10:38:30	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk'
00:00:41.406   10:38:30	-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:41.406   10:38:30	-- spdk/autobuild.sh@12 -- $ umask 022
00:00:41.406   10:38:30	-- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk
00:00:41.406   10:38:30	-- spdk/autobuild.sh@16 -- $ date -u
00:00:41.406  Sun Dec 15 09:38:30 AM UTC 2024
00:00:41.406   10:38:30	-- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:41.406  LTS-67-gc13c99a5e
00:00:41.406   10:38:30	-- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:41.406   10:38:30	-- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:41.406   10:38:30	-- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:41.406   10:38:30	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:41.406   10:38:30	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:41.406   10:38:30	-- common/autotest_common.sh@10 -- $ set +x
00:00:41.406  ************************************
00:00:41.406  START TEST ubsan
00:00:41.406  ************************************
00:00:41.406   10:38:30	-- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:00:41.406  using ubsan
00:00:41.406  
00:00:41.406  real	0m0.000s
00:00:41.406  user	0m0.000s
00:00:41.406  sys	0m0.000s
00:00:41.406   10:38:30	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:00:41.406   10:38:30	-- common/autotest_common.sh@10 -- $ set +x
00:00:41.406  ************************************
00:00:41.406  END TEST ubsan
00:00:41.406  ************************************
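The START/END banners and the real/user/sys summary come from the run_test helper in autotest_common.sh. A simplified sketch of the pattern (assumed shape; the real helper also validates arguments and manages xtrace state):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test ubsan echo 'using ubsan'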
00:00:41.406   10:38:30	-- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:41.406   10:38:30	-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:41.406   10:38:30	-- spdk/autobuild.sh@47 -- $ [[ 1 -eq 1 ]]
00:00:41.406   10:38:30	-- spdk/autobuild.sh@48 -- $ ocf_precompile
00:00:41.406   10:38:30	-- common/autobuild_common.sh@424 -- $ run_test autobuild_ocf_precompile _ocf_precompile
00:00:41.406   10:38:30	-- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']'
00:00:41.406   10:38:30	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:41.406   10:38:30	-- common/autotest_common.sh@10 -- $ set +x
00:00:41.406  ************************************
00:00:41.406  START TEST autobuild_ocf_precompile
00:00:41.406  ************************************
00:00:41.406   10:38:30	-- common/autotest_common.sh@1114 -- $ _ocf_precompile
00:00:41.406    10:38:30	-- common/autobuild_common.sh@21 -- $ echo --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk
00:00:41.406    10:38:30	-- common/autobuild_common.sh@21 -- $ sed s/--enable-coverage//g
00:00:41.406   10:38:30	-- common/autobuild_common.sh@21 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --with-ublk
00:00:41.666  Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk
00:00:41.666  Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build
00:00:41.925  Using 'verbs' RDMA provider
00:00:57.387  Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:12.444  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:12.444  Creating mk/config.mk...done.
00:01:12.444  Creating mk/cc.flags.mk...done.
00:01:12.444  Type 'make' to build.
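The OCF precompile stage filters --enable-coverage out of the configure flags with sed, reconfigures, and then (in the make steps that follow) builds the OCF environment into a standalone archive; condensed:

    params=$(echo "$config_params" | sed s/--enable-coverage//g)
    ./configure $params
    make -j72 include/spdk/config.h
    CC=gcc CCAR=ar make -j72 -C lib/env_ocf exportlib O=$PWD/ocf.a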
00:01:12.444   10:38:59	-- common/autobuild_common.sh@22 -- $ make -j72 include/spdk/config.h
00:01:12.444   10:38:59	-- common/autobuild_common.sh@23 -- $ CC=gcc
00:01:12.444   10:38:59	-- common/autobuild_common.sh@23 -- $ CCAR=ar
00:01:12.444   10:38:59	-- common/autobuild_common.sh@23 -- $ make -j72 -C /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf exportlib O=/var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a
00:01:12.444  make: Entering directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf'
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_ctx.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_metadata.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_composite_volume.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_queue.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/promotion/nhit.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_core.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_mngt.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_debug.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/alru.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/cleaning/acp.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_err.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_stats.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io_class.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_types.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cleaner.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cache.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_def.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_volume.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_io.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_logger.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/include/ocf/ocf_cfg.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_volume_priv.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io_allocator.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_async_lock.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_stats.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_realloc.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_pipeline.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_alock.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_list.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_parallelize.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_rbtree.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cleaner.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_cache_line.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_generator.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_user_part.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_io.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_refcnt.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/utils/utils_request.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_structs.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/nhit/nhit_hash.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.c
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/ops.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/promotion/promotion.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_priv.h
00:01:12.444   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_builder.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_queue_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_logger.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_misc.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_cache.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_common.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core_pool.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_metadata.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_io_class.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_core.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/mngt/ocf_mngt_flush.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_dynamic.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_status.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_eviction_policy.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_internal.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_misc.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_bit.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_collision.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_segment_id.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_superblock.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_common.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_structs.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_partition_structs.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cleaning_policy.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_cache_line.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_io.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_volatile.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_raw_atomic.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_core.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/metadata/metadata_passive_update.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru_structs.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning_ops.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop_structs.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/nop.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/cleaning.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/alru.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp_structs.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/cleaning/acp.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_seq_cutoff.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_core.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_ops.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wi.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_inv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wo.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_zero.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wt.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_fast.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_ops.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_common.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_discard.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_debug.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wa.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_bf.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_wb.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_d2c.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_pt.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/cache_engine.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/engine/engine_rd.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_ctx_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_stats_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_space.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_def_priv.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_io_class.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume.c
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.h
00:01:12.445   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.c
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.h
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.c
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.c
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_concurrency.h
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.c
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_mio_concurrency.h
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.c
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/concurrency/ocf_pio_concurrency.h
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_cache.c
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_request.c
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_lru_structs.h
00:01:12.446   INSTALL  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf/src/ocf/ocf_composite_volume_priv.h
00:01:12.446    CC env_ocf/mpool.o
00:01:12.446    CC env_ocf/ocf_env.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_async_lock.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_pipeline.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_alock.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_cache_line.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_realloc.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_rbtree.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_generator.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_user_part.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_list.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_parallelize.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_cleaner.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_io.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_refcnt.o
00:01:12.446    CC env_ocf/src/ocf/utils/utils_request.o
00:01:12.446    CC env_ocf/src/ocf/ocf_volume.o
00:01:12.446    CC env_ocf/src/ocf/promotion/nhit/nhit_hash.o
00:01:12.446    CC env_ocf/src/ocf/promotion/nhit/nhit.o
00:01:12.446    CC env_ocf/src/ocf/promotion/promotion.o
00:01:12.446    CC env_ocf/src/ocf/mngt/ocf_mngt_misc.o
00:01:12.446    CC env_ocf/src/ocf/mngt/ocf_mngt_cache.o
00:01:12.446    CC env_ocf/src/ocf/mngt/ocf_mngt_common.o
00:01:12.446    CC env_ocf/src/ocf/mngt/ocf_mngt_core_pool.o
00:01:12.446    CC env_ocf/src/ocf/mngt/ocf_mngt_io_class.o
00:01:12.446    CC env_ocf/src/ocf/mngt/ocf_mngt_core.o
00:01:12.446    CC env_ocf/src/ocf/mngt/ocf_mngt_flush.o
00:01:12.446    CC env_ocf/src/ocf/ocf_queue.o
00:01:12.446    CC env_ocf/src/ocf/ocf_stats_builder.o
00:01:12.446    CC env_ocf/src/ocf/ocf_logger.o
00:01:12.446    CC env_ocf/src/ocf/ocf_metadata.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_raw.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_eviction_policy.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_segment.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_partition.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_collision.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_raw_dynamic.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_misc.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_superblock.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_cleaning_policy.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_raw_atomic.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_raw_volatile.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_io.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_core.o
00:01:12.446    CC env_ocf/src/ocf/metadata/metadata_passive_update.o
00:01:12.446    CC env_ocf/src/ocf/cleaning/nop.o
00:01:12.446    CC env_ocf/src/ocf/cleaning/alru.o
00:01:12.446    CC env_ocf/src/ocf/cleaning/acp.o
00:01:12.446    CC env_ocf/src/ocf/cleaning/cleaning.o
00:01:12.446    CC env_ocf/src/ocf/ocf_seq_cutoff.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_bf.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_fast.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_wo.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_inv.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_ops.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_wi.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_discard.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_common.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_zero.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_wt.o
00:01:12.446    CC env_ocf/src/ocf/engine/cache_engine.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_wa.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_wb.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_d2c.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_pt.o
00:01:12.446    CC env_ocf/src/ocf/engine/engine_rd.o
00:01:12.446    CC env_ocf/src/ocf/ocf_core.o
00:01:12.446    CC env_ocf/src/ocf/ocf_stats.o
00:01:12.446    CC env_ocf/src/ocf/ocf_io.o
00:01:12.446    CC env_ocf/src/ocf/ocf_lru.o
00:01:12.446    CC env_ocf/src/ocf/ocf_ctx.o
00:01:12.446    CC env_ocf/src/ocf/ocf_space.o
00:01:12.446    CC env_ocf/src/ocf/concurrency/ocf_mio_concurrency.o
00:01:12.446    CC env_ocf/src/ocf/concurrency/ocf_concurrency.o
00:01:12.446    CC env_ocf/src/ocf/concurrency/ocf_pio_concurrency.o
00:01:12.446    CC env_ocf/src/ocf/concurrency/ocf_cache_line_concurrency.o
00:01:12.446    CC env_ocf/src/ocf/concurrency/ocf_metadata_concurrency.o
00:01:12.446    CC env_ocf/src/ocf/ocf_io_class.o
00:01:12.446    CC env_ocf/src/ocf/ocf_composite_volume.o
00:01:12.446    CC env_ocf/src/ocf/ocf_cache.o
00:01:12.446    CC env_ocf/src/ocf/ocf_request.o
00:01:13.015    LIB libspdk_ocfenv.a
00:01:13.275  cp /var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a /var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a
00:01:13.275  make: Leaving directory '/var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_ocf'
00:01:13.275   10:39:02	-- common/autobuild_common.sh@25 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a'
00:01:13.275   10:39:02	-- common/autobuild_common.sh@27 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a
00:01:13.275  Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk
00:01:13.275  Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build
00:01:13.844  Using 'verbs' RDMA provider
00:01:26.633  Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:38.850  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:38.850  Creating mk/config.mk...done.
00:01:38.850  Creating mk/cc.flags.mk...done.
00:01:38.850  Type 'make' to build.
00:01:38.850  
00:01:38.850  real	0m57.306s
00:01:38.850  user	0m55.909s
00:01:38.850  sys	0m40.768s
00:01:38.850   10:39:27	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:01:38.850   10:39:27	-- common/autotest_common.sh@10 -- $ set +x
00:01:38.850  ************************************
00:01:38.850  END TEST autobuild_ocf_precompile
00:01:38.850  ************************************
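With ocf.a built, the configure flags are re-extended to point --with-ocf at the archive and configure is re-run (once here, and once more below with --with-shared added for the final build); roughly:

    config_params+=" --with-ocf=$PWD/ocf.a"
    ./configure $config_params                 # autobuild_common.sh@27
    ./configure $config_params --with-shared   # final build, autobuild.sh@67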
00:01:38.850   10:39:27	-- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:38.850   10:39:27	-- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:38.850   10:39:27	-- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:38.850   10:39:27	-- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:38.850   10:39:27	-- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:38.850   10:39:27	-- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvme-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk --with-ocf=//var/jenkins/workspace/nvme-phy-autotest/spdk/ocf.a --with-shared
00:01:38.850  Using default SPDK env in /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk
00:01:38.850  Using default DPDK in /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build
00:01:39.419  Using 'verbs' RDMA provider
00:01:52.203  Configuring ISA-L (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:02:04.419  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvme-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:04.419  Creating mk/config.mk...done.
00:02:04.419  Creating mk/cc.flags.mk...done.
00:02:04.419  Type 'make' to build.
00:02:04.419   10:39:52	-- spdk/autobuild.sh@69 -- $ run_test make make -j72
00:02:04.419   10:39:52	-- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:04.419   10:39:52	-- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:04.419   10:39:52	-- common/autotest_common.sh@10 -- $ set +x
00:02:04.419  ************************************
00:02:04.419  START TEST make
00:02:04.419  ************************************
00:02:04.419   10:39:52	-- common/autotest_common.sh@1114 -- $ make -j72
00:02:04.419  make[1]: Nothing to be done for 'all'.
00:02:14.414  The Meson build system
00:02:14.414  Version: 1.5.0
00:02:14.414  Source dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk
00:02:14.414  Build dir: /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp
00:02:14.414  Build type: native build
00:02:14.414  Program cat found: YES (/usr/bin/cat)
00:02:14.414  Project name: DPDK
00:02:14.414  Project version: 23.11.0
00:02:14.414  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:14.414  C linker for the host machine: cc ld.bfd 2.40-14
00:02:14.414  Host machine cpu family: x86_64
00:02:14.414  Host machine cpu: x86_64
00:02:14.414  Message: ## Building in Developer Mode ##
00:02:14.414  Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:14.414  Program check-symbols.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:14.414  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:14.414  Program python3 found: YES (/usr/bin/python3)
00:02:14.414  Program cat found: YES (/usr/bin/cat)
00:02:14.414  Compiler for C supports arguments -march=native: YES 
00:02:14.414  Checking for size of "void *" : 8 
00:02:14.414  Checking for size of "void *" : 8 (cached)
00:02:14.414  Library m found: YES
00:02:14.414  Library numa found: YES
00:02:14.414  Has header "numaif.h" : YES 
00:02:14.414  Library fdt found: NO
00:02:14.414  Library execinfo found: NO
00:02:14.414  Has header "execinfo.h" : YES 
00:02:14.414  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:14.414  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:14.414  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:14.414  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:14.414  Run-time dependency openssl found: YES 3.1.1
00:02:14.414  Run-time dependency libpcap found: YES 1.10.4
00:02:14.414  Has header "pcap.h" with dependency libpcap: YES 
00:02:14.414  Compiler for C supports arguments -Wcast-qual: YES 
00:02:14.414  Compiler for C supports arguments -Wdeprecated: YES 
00:02:14.414  Compiler for C supports arguments -Wformat: YES 
00:02:14.414  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:02:14.414  Compiler for C supports arguments -Wformat-security: NO 
00:02:14.414  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:14.414  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:14.414  Compiler for C supports arguments -Wnested-externs: YES 
00:02:14.414  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:14.414  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:14.414  Compiler for C supports arguments -Wsign-compare: YES 
00:02:14.414  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:14.414  Compiler for C supports arguments -Wundef: YES 
00:02:14.414  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:14.414  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:14.414  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:02:14.414  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:14.414  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:02:14.414  Program objdump found: YES (/usr/bin/objdump)
00:02:14.414  Compiler for C supports arguments -mavx512f: YES 
00:02:14.414  Checking if "AVX512 checking" compiles: YES 
00:02:14.414  Fetching value of define "__SSE4_2__" : 1 
00:02:14.414  Fetching value of define "__AES__" : 1 
00:02:14.414  Fetching value of define "__AVX__" : 1 
00:02:14.414  Fetching value of define "__AVX2__" : 1 
00:02:14.414  Fetching value of define "__AVX512BW__" : 1 
00:02:14.414  Fetching value of define "__AVX512CD__" : 1 
00:02:14.415  Fetching value of define "__AVX512DQ__" : 1 
00:02:14.415  Fetching value of define "__AVX512F__" : 1 
00:02:14.415  Fetching value of define "__AVX512VL__" : 1 
00:02:14.415  Fetching value of define "__PCLMUL__" : 1 
00:02:14.415  Fetching value of define "__RDRND__" : 1 
00:02:14.415  Fetching value of define "__RDSEED__" : 1 
00:02:14.415  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:02:14.415  Fetching value of define "__znver1__" : (undefined) 
00:02:14.415  Fetching value of define "__znver2__" : (undefined) 
00:02:14.415  Fetching value of define "__znver3__" : (undefined) 
00:02:14.415  Fetching value of define "__znver4__" : (undefined) 
00:02:14.415  Compiler for C supports arguments -Wno-format-truncation: YES 
00:02:14.415  Message: lib/log: Defining dependency "log"
00:02:14.415  Message: lib/kvargs: Defining dependency "kvargs"
00:02:14.415  Message: lib/telemetry: Defining dependency "telemetry"
00:02:14.415  Checking for function "getentropy" : NO 
00:02:14.415  Message: lib/eal: Defining dependency "eal"
00:02:14.415  Message: lib/ring: Defining dependency "ring"
00:02:14.415  Message: lib/rcu: Defining dependency "rcu"
00:02:14.415  Message: lib/mempool: Defining dependency "mempool"
00:02:14.415  Message: lib/mbuf: Defining dependency "mbuf"
00:02:14.415  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:14.415  Fetching value of define "__AVX512F__" : 1 (cached)
00:02:14.415  Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:14.415  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:14.415  Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:14.415  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:14.415  Compiler for C supports arguments -mpclmul: YES 
00:02:14.415  Compiler for C supports arguments -maes: YES 
00:02:14.415  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:14.415  Compiler for C supports arguments -mavx512bw: YES 
00:02:14.415  Compiler for C supports arguments -mavx512dq: YES 
00:02:14.415  Compiler for C supports arguments -mavx512vl: YES 
00:02:14.415  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:14.415  Compiler for C supports arguments -mavx2: YES 
00:02:14.415  Compiler for C supports arguments -mavx: YES 
00:02:14.415  Message: lib/net: Defining dependency "net"
00:02:14.415  Message: lib/meter: Defining dependency "meter"
00:02:14.415  Message: lib/ethdev: Defining dependency "ethdev"
00:02:14.415  Message: lib/pci: Defining dependency "pci"
00:02:14.415  Message: lib/cmdline: Defining dependency "cmdline"
00:02:14.415  Message: lib/hash: Defining dependency "hash"
00:02:14.415  Message: lib/timer: Defining dependency "timer"
00:02:14.415  Message: lib/compressdev: Defining dependency "compressdev"
00:02:14.415  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:14.415  Message: lib/dmadev: Defining dependency "dmadev"
00:02:14.415  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:14.415  Message: lib/power: Defining dependency "power"
00:02:14.415  Message: lib/reorder: Defining dependency "reorder"
00:02:14.415  Message: lib/security: Defining dependency "security"
00:02:14.415  Has header "linux/userfaultfd.h" : YES 
00:02:14.415  Has header "linux/vduse.h" : YES 
00:02:14.415  Message: lib/vhost: Defining dependency "vhost"
00:02:14.415  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:14.415  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:14.415  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:14.415  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:14.415  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:14.415  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:14.415  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:14.415  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:14.415  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:14.415  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:14.415  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:14.415  Configuring doxy-api-html.conf using configuration
00:02:14.415  Configuring doxy-api-man.conf using configuration
00:02:14.415  Program mandb found: YES (/usr/bin/mandb)
00:02:14.415  Program sphinx-build found: NO
00:02:14.415  Configuring rte_build_config.h using configuration
00:02:14.415  Message: 
00:02:14.415  =================
00:02:14.415  Applications Enabled
00:02:14.415  =================
00:02:14.415  
00:02:14.415  apps:
00:02:14.415  	
00:02:14.415  
00:02:14.415  Message: 
00:02:14.415  =================
00:02:14.415  Libraries Enabled
00:02:14.415  =================
00:02:14.415  
00:02:14.415  libs:
00:02:14.415  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:14.415  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:14.415  	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:14.415  
00:02:14.415  Message: 
00:02:14.415  ===============
00:02:14.415  Drivers Enabled
00:02:14.415  ===============
00:02:14.415  
00:02:14.415  common:
00:02:14.415  	
00:02:14.415  bus:
00:02:14.415  	pci, vdev, 
00:02:14.415  mempool:
00:02:14.415  	ring, 
00:02:14.415  dma:
00:02:14.415  	
00:02:14.415  net:
00:02:14.415  	
00:02:14.415  crypto:
00:02:14.415  	
00:02:14.415  compress:
00:02:14.415  	
00:02:14.415  vdpa:
00:02:14.415  	
00:02:14.415  
00:02:14.415  Message: 
00:02:14.415  =================
00:02:14.415  Content Skipped
00:02:14.415  =================
00:02:14.415  
00:02:14.415  apps:
00:02:14.415  	dumpcap:	explicitly disabled via build config
00:02:14.415  	graph:	explicitly disabled via build config
00:02:14.415  	pdump:	explicitly disabled via build config
00:02:14.415  	proc-info:	explicitly disabled via build config
00:02:14.415  	test-acl:	explicitly disabled via build config
00:02:14.415  	test-bbdev:	explicitly disabled via build config
00:02:14.415  	test-cmdline:	explicitly disabled via build config
00:02:14.415  	test-compress-perf:	explicitly disabled via build config
00:02:14.415  	test-crypto-perf:	explicitly disabled via build config
00:02:14.415  	test-dma-perf:	explicitly disabled via build config
00:02:14.415  	test-eventdev:	explicitly disabled via build config
00:02:14.415  	test-fib:	explicitly disabled via build config
00:02:14.415  	test-flow-perf:	explicitly disabled via build config
00:02:14.415  	test-gpudev:	explicitly disabled via build config
00:02:14.415  	test-mldev:	explicitly disabled via build config
00:02:14.415  	test-pipeline:	explicitly disabled via build config
00:02:14.415  	test-pmd:	explicitly disabled via build config
00:02:14.415  	test-regex:	explicitly disabled via build config
00:02:14.415  	test-sad:	explicitly disabled via build config
00:02:14.415  	test-security-perf:	explicitly disabled via build config
00:02:14.415  	
00:02:14.415  libs:
00:02:14.415  	metrics:	explicitly disabled via build config
00:02:14.415  	acl:	explicitly disabled via build config
00:02:14.415  	bbdev:	explicitly disabled via build config
00:02:14.415  	bitratestats:	explicitly disabled via build config
00:02:14.415  	bpf:	explicitly disabled via build config
00:02:14.415  	cfgfile:	explicitly disabled via build config
00:02:14.415  	distributor:	explicitly disabled via build config
00:02:14.415  	efd:	explicitly disabled via build config
00:02:14.415  	eventdev:	explicitly disabled via build config
00:02:14.415  	dispatcher:	explicitly disabled via build config
00:02:14.415  	gpudev:	explicitly disabled via build config
00:02:14.415  	gro:	explicitly disabled via build config
00:02:14.415  	gso:	explicitly disabled via build config
00:02:14.415  	ip_frag:	explicitly disabled via build config
00:02:14.415  	jobstats:	explicitly disabled via build config
00:02:14.415  	latencystats:	explicitly disabled via build config
00:02:14.415  	lpm:	explicitly disabled via build config
00:02:14.415  	member:	explicitly disabled via build config
00:02:14.415  	pcapng:	explicitly disabled via build config
00:02:14.415  	rawdev:	explicitly disabled via build config
00:02:14.415  	regexdev:	explicitly disabled via build config
00:02:14.415  	mldev:	explicitly disabled via build config
00:02:14.415  	rib:	explicitly disabled via build config
00:02:14.415  	sched:	explicitly disabled via build config
00:02:14.415  	stack:	explicitly disabled via build config
00:02:14.415  	ipsec:	explicitly disabled via build config
00:02:14.415  	pdcp:	explicitly disabled via build config
00:02:14.415  	fib:	explicitly disabled via build config
00:02:14.415  	port:	explicitly disabled via build config
00:02:14.415  	pdump:	explicitly disabled via build config
00:02:14.415  	table:	explicitly disabled via build config
00:02:14.415  	pipeline:	explicitly disabled via build config
00:02:14.415  	graph:	explicitly disabled via build config
00:02:14.415  	node:	explicitly disabled via build config
00:02:14.415  	
00:02:14.415  drivers:
00:02:14.415  	common/cpt:	not in enabled drivers build config
00:02:14.415  	common/dpaax:	not in enabled drivers build config
00:02:14.415  	common/iavf:	not in enabled drivers build config
00:02:14.415  	common/idpf:	not in enabled drivers build config
00:02:14.415  	common/mvep:	not in enabled drivers build config
00:02:14.415  	common/octeontx:	not in enabled drivers build config
00:02:14.415  	bus/auxiliary:	not in enabled drivers build config
00:02:14.415  	bus/cdx:	not in enabled drivers build config
00:02:14.415  	bus/dpaa:	not in enabled drivers build config
00:02:14.415  	bus/fslmc:	not in enabled drivers build config
00:02:14.415  	bus/ifpga:	not in enabled drivers build config
00:02:14.415  	bus/platform:	not in enabled drivers build config
00:02:14.415  	bus/vmbus:	not in enabled drivers build config
00:02:14.415  	common/cnxk:	not in enabled drivers build config
00:02:14.415  	common/mlx5:	not in enabled drivers build config
00:02:14.415  	common/nfp:	not in enabled drivers build config
00:02:14.415  	common/qat:	not in enabled drivers build config
00:02:14.415  	common/sfc_efx:	not in enabled drivers build config
00:02:14.416  	mempool/bucket:	not in enabled drivers build config
00:02:14.416  	mempool/cnxk:	not in enabled drivers build config
00:02:14.416  	mempool/dpaa:	not in enabled drivers build config
00:02:14.416  	mempool/dpaa2:	not in enabled drivers build config
00:02:14.416  	mempool/octeontx:	not in enabled drivers build config
00:02:14.416  	mempool/stack:	not in enabled drivers build config
00:02:14.416  	dma/cnxk:	not in enabled drivers build config
00:02:14.416  	dma/dpaa:	not in enabled drivers build config
00:02:14.416  	dma/dpaa2:	not in enabled drivers build config
00:02:14.416  	dma/hisilicon:	not in enabled drivers build config
00:02:14.416  	dma/idxd:	not in enabled drivers build config
00:02:14.416  	dma/ioat:	not in enabled drivers build config
00:02:14.416  	dma/skeleton:	not in enabled drivers build config
00:02:14.416  	net/af_packet:	not in enabled drivers build config
00:02:14.416  	net/af_xdp:	not in enabled drivers build config
00:02:14.416  	net/ark:	not in enabled drivers build config
00:02:14.416  	net/atlantic:	not in enabled drivers build config
00:02:14.416  	net/avp:	not in enabled drivers build config
00:02:14.416  	net/axgbe:	not in enabled drivers build config
00:02:14.416  	net/bnx2x:	not in enabled drivers build config
00:02:14.416  	net/bnxt:	not in enabled drivers build config
00:02:14.416  	net/bonding:	not in enabled drivers build config
00:02:14.416  	net/cnxk:	not in enabled drivers build config
00:02:14.416  	net/cpfl:	not in enabled drivers build config
00:02:14.416  	net/cxgbe:	not in enabled drivers build config
00:02:14.416  	net/dpaa:	not in enabled drivers build config
00:02:14.416  	net/dpaa2:	not in enabled drivers build config
00:02:14.416  	net/e1000:	not in enabled drivers build config
00:02:14.416  	net/ena:	not in enabled drivers build config
00:02:14.416  	net/enetc:	not in enabled drivers build config
00:02:14.416  	net/enetfec:	not in enabled drivers build config
00:02:14.416  	net/enic:	not in enabled drivers build config
00:02:14.416  	net/failsafe:	not in enabled drivers build config
00:02:14.416  	net/fm10k:	not in enabled drivers build config
00:02:14.416  	net/gve:	not in enabled drivers build config
00:02:14.416  	net/hinic:	not in enabled drivers build config
00:02:14.416  	net/hns3:	not in enabled drivers build config
00:02:14.416  	net/i40e:	not in enabled drivers build config
00:02:14.416  	net/iavf:	not in enabled drivers build config
00:02:14.416  	net/ice:	not in enabled drivers build config
00:02:14.416  	net/idpf:	not in enabled drivers build config
00:02:14.416  	net/igc:	not in enabled drivers build config
00:02:14.416  	net/ionic:	not in enabled drivers build config
00:02:14.416  	net/ipn3ke:	not in enabled drivers build config
00:02:14.416  	net/ixgbe:	not in enabled drivers build config
00:02:14.416  	net/mana:	not in enabled drivers build config
00:02:14.416  	net/memif:	not in enabled drivers build config
00:02:14.416  	net/mlx4:	not in enabled drivers build config
00:02:14.416  	net/mlx5:	not in enabled drivers build config
00:02:14.416  	net/mvneta:	not in enabled drivers build config
00:02:14.416  	net/mvpp2:	not in enabled drivers build config
00:02:14.416  	net/netvsc:	not in enabled drivers build config
00:02:14.416  	net/nfb:	not in enabled drivers build config
00:02:14.416  	net/nfp:	not in enabled drivers build config
00:02:14.416  	net/ngbe:	not in enabled drivers build config
00:02:14.416  	net/null:	not in enabled drivers build config
00:02:14.416  	net/octeontx:	not in enabled drivers build config
00:02:14.416  	net/octeon_ep:	not in enabled drivers build config
00:02:14.416  	net/pcap:	not in enabled drivers build config
00:02:14.416  	net/pfe:	not in enabled drivers build config
00:02:14.416  	net/qede:	not in enabled drivers build config
00:02:14.416  	net/ring:	not in enabled drivers build config
00:02:14.416  	net/sfc:	not in enabled drivers build config
00:02:14.416  	net/softnic:	not in enabled drivers build config
00:02:14.416  	net/tap:	not in enabled drivers build config
00:02:14.416  	net/thunderx:	not in enabled drivers build config
00:02:14.416  	net/txgbe:	not in enabled drivers build config
00:02:14.416  	net/vdev_netvsc:	not in enabled drivers build config
00:02:14.416  	net/vhost:	not in enabled drivers build config
00:02:14.416  	net/virtio:	not in enabled drivers build config
00:02:14.416  	net/vmxnet3:	not in enabled drivers build config
00:02:14.416  	raw/*:	missing internal dependency, "rawdev"
00:02:14.416  	crypto/armv8:	not in enabled drivers build config
00:02:14.416  	crypto/bcmfs:	not in enabled drivers build config
00:02:14.416  	crypto/caam_jr:	not in enabled drivers build config
00:02:14.416  	crypto/ccp:	not in enabled drivers build config
00:02:14.416  	crypto/cnxk:	not in enabled drivers build config
00:02:14.416  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:14.416  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:14.416  	crypto/ipsec_mb:	not in enabled drivers build config
00:02:14.416  	crypto/mlx5:	not in enabled drivers build config
00:02:14.416  	crypto/mvsam:	not in enabled drivers build config
00:02:14.416  	crypto/nitrox:	not in enabled drivers build config
00:02:14.416  	crypto/null:	not in enabled drivers build config
00:02:14.416  	crypto/octeontx:	not in enabled drivers build config
00:02:14.416  	crypto/openssl:	not in enabled drivers build config
00:02:14.416  	crypto/scheduler:	not in enabled drivers build config
00:02:14.416  	crypto/uadk:	not in enabled drivers build config
00:02:14.416  	crypto/virtio:	not in enabled drivers build config
00:02:14.416  	compress/isal:	not in enabled drivers build config
00:02:14.416  	compress/mlx5:	not in enabled drivers build config
00:02:14.416  	compress/octeontx:	not in enabled drivers build config
00:02:14.416  	compress/zlib:	not in enabled drivers build config
00:02:14.416  	regex/*:	missing internal dependency, "regexdev"
00:02:14.416  	ml/*:	missing internal dependency, "mldev"
00:02:14.416  	vdpa/ifc:	not in enabled drivers build config
00:02:14.416  	vdpa/mlx5:	not in enabled drivers build config
00:02:14.416  	vdpa/nfp:	not in enabled drivers build config
00:02:14.416  	vdpa/sfc:	not in enabled drivers build config
00:02:14.416  	event/*:	missing internal dependency, "eventdev"
00:02:14.416  	baseband/*:	missing internal dependency, "bbdev"
00:02:14.416  	gpu/*:	missing internal dependency, "gpudev"
00:02:14.416  	
00:02:14.416  
00:02:14.416  Build targets in project: 85
00:02:14.416  
00:02:14.416  DPDK 23.11.0
00:02:14.416  
00:02:14.416    User defined options
00:02:14.416      buildtype          : debug
00:02:14.416      default_library    : shared
00:02:14.416      libdir             : lib
00:02:14.416      prefix             : /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build
00:02:14.416      c_args             : -fPIC -Werror  -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:02:14.416      c_link_args        : 
00:02:14.416      cpu_instruction_set: native
00:02:14.416      disable_apps       : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:02:14.416      disable_libs       : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev
00:02:14.416      enable_docs        : false
00:02:14.416      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring
00:02:14.416      enable_kmods       : false
00:02:14.416      tests              : false
00:02:14.416  
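Note: the "User defined options" summary above corresponds to a meson invocation along the following lines; this is a sketch reconstructed from the printed values (the SPDK configure/autotest scripts assemble the real command, so exact flags and quoting may differ):

cd /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk
meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    --libdir=lib \
    --prefix=/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build \
    -Dc_args='-fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds' \
    -Dcpu_instruction_set=native \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev \
    -Ddisable_libs=port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev

Ninja then drives the actual compilation of the 85 targets, as the lines that follow show.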
00:02:14.416  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:14.682  ninja: Entering directory `/var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp'
00:02:14.682  [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:14.682  [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:14.944  [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:14.944  [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:14.944  [5/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:14.944  [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:14.944  [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:14.944  [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:14.944  [9/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:14.944  [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:14.944  [11/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:14.944  [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:14.944  [13/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:14.944  [14/265] Linking static target lib/librte_kvargs.a
00:02:14.944  [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:14.944  [16/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:14.944  [17/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:14.944  [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:14.944  [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:14.944  [20/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:14.944  [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:14.944  [22/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:14.944  [23/265] Linking static target lib/librte_log.a
00:02:14.944  [24/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:14.944  [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:15.203  [26/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:15.203  [27/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:15.203  [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:15.203  [29/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:15.463  [30/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:15.463  [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:15.463  [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:15.463  [33/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:15.463  [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:15.463  [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:15.463  [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:15.463  [37/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:15.463  [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:15.463  [39/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:15.463  [40/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:15.463  [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:15.463  [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:15.463  [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:15.463  [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:15.463  [45/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:15.463  [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:15.463  [47/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.463  [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:15.463  [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:15.463  [50/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:15.463  [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:15.463  [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:15.463  [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:15.463  [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:15.463  [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:15.463  [56/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:15.463  [57/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:15.463  [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:15.463  [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:15.463  [60/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:15.463  [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:15.463  [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:15.463  [63/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:15.463  [64/265] Linking static target lib/librte_pci.a
00:02:15.463  [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:15.463  [66/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:15.463  [67/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:15.463  [68/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:15.463  [69/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:15.463  [70/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:15.463  [71/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:15.463  [72/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:15.463  [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:15.463  [74/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:15.463  [75/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:15.463  [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:15.463  [77/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:15.463  [78/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:15.463  [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:15.463  [80/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:15.463  [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:15.463  [82/265] Linking static target lib/librte_telemetry.a
00:02:15.463  [83/265] Linking static target lib/librte_ring.a
00:02:15.463  [84/265] Linking static target lib/librte_meter.a
00:02:15.463  [85/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:15.463  [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:15.464  [87/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:15.725  [88/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:15.726  [89/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:15.726  [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:15.726  [91/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:15.726  [92/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:15.726  [93/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:15.726  [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:15.726  [95/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:15.726  [96/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:15.726  [97/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:15.726  [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:15.726  [99/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:15.726  [100/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:15.726  [101/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:15.726  [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:15.726  [103/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:15.726  [104/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:15.726  [105/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:15.726  [106/265] Linking static target lib/librte_mempool.a
00:02:15.726  [107/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:15.726  [108/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:15.726  [109/265] Linking static target lib/librte_rcu.a
00:02:15.726  [110/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:15.726  [111/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:15.726  [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:15.726  [113/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:15.985  [114/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.985  [115/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.985  [116/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:15.985  [117/265] Linking static target lib/librte_net.a
00:02:15.985  [118/265] Linking target lib/librte_log.so.24.0
00:02:15.985  [119/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.985  [120/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:15.985  [121/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.985  [122/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:15.985  [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:15.985  [124/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:15.985  [125/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:15.985  [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:15.985  [127/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:15.985  [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:15.985  [129/265] Linking static target lib/librte_mbuf.a
00:02:15.985  [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:15.985  [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:15.985  [132/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:15.985  [133/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:16.243  [134/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:16.243  [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:16.243  [136/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:16.243  [137/265] Linking static target lib/librte_cmdline.a
00:02:16.243  [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:16.243  [139/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.243  [140/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:16.243  [141/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:16.243  [142/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:16.243  [143/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:02:16.243  [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:16.243  [145/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:16.243  [146/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:16.243  [147/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.243  [148/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:16.243  [149/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:16.243  [150/265] Linking static target lib/librte_timer.a
00:02:16.243  [151/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:16.243  [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:16.243  [153/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:16.243  [154/265] Linking target lib/librte_kvargs.so.24.0
00:02:16.243  [155/265] Linking static target lib/librte_dmadev.a
00:02:16.243  [156/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:16.243  [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:16.243  [158/265] Linking target lib/librte_telemetry.so.24.0
00:02:16.243  [159/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:16.243  [160/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:16.243  [161/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:16.243  [162/265] Linking static target lib/librte_eal.a
00:02:16.243  [163/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:16.243  [164/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:16.243  [165/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:16.243  [166/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:16.243  [167/265] Linking static target lib/librte_compressdev.a
00:02:16.243  [168/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.243  [169/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:16.243  [170/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:16.243  [171/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:16.243  [172/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:16.243  [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:16.243  [174/265] Linking static target lib/librte_power.a
00:02:16.243  [175/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:02:16.243  [176/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:16.501  [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:16.501  [178/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:02:16.501  [179/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:16.501  [180/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:02:16.501  [181/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:16.501  [182/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:16.501  [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:16.501  [184/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:16.501  [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:16.501  [186/265] Linking static target lib/librte_reorder.a
00:02:16.501  [187/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:16.501  [188/265] Linking static target lib/librte_security.a
00:02:16.501  [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:16.501  [190/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:16.501  [191/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:16.501  [192/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:16.501  [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:16.501  [194/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:16.501  [195/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:16.501  [196/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:16.501  [197/265] Linking static target drivers/librte_bus_vdev.a
00:02:16.501  [198/265] Linking static target lib/librte_hash.a
00:02:16.501  [199/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:16.501  [200/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.760  [201/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.760  [202/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:16.760  [203/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:16.760  [204/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:16.760  [205/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:16.760  [206/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:16.760  [207/265] Linking static target drivers/librte_bus_pci.a
00:02:16.760  [208/265] Linking static target lib/librte_cryptodev.a
00:02:16.760  [209/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:16.760  [210/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:16.760  [211/265] Linking static target drivers/librte_mempool_ring.a
00:02:16.760  [212/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.019  [213/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.019  [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.019  [215/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.019  [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.019  [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.278  [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:17.278  [219/265] Linking static target lib/librte_ethdev.a
00:02:17.278  [220/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.537  [221/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:17.537  [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.537  [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.537  [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:18.916  [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:18.916  [226/265] Linking static target lib/librte_vhost.a
00:02:18.916  [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:21.452  [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.726  [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.017  [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.017  [231/265] Linking target lib/librte_eal.so.24.0
00:02:30.017  [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:02:30.017  [233/265] Linking target lib/librte_meter.so.24.0
00:02:30.017  [234/265] Linking target lib/librte_ring.so.24.0
00:02:30.017  [235/265] Linking target lib/librte_pci.so.24.0
00:02:30.017  [236/265] Linking target lib/librte_dmadev.so.24.0
00:02:30.017  [237/265] Linking target drivers/librte_bus_vdev.so.24.0
00:02:30.017  [238/265] Linking target lib/librte_timer.so.24.0
00:02:30.017  [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:02:30.017  [240/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:02:30.017  [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:02:30.017  [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:02:30.017  [243/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:02:30.017  [244/265] Linking target lib/librte_mempool.so.24.0
00:02:30.017  [245/265] Linking target drivers/librte_bus_pci.so.24.0
00:02:30.017  [246/265] Linking target lib/librte_rcu.so.24.0
00:02:30.017  [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:02:30.276  [248/265] Linking target lib/librte_mbuf.so.24.0
00:02:30.276  [249/265] Linking target drivers/librte_mempool_ring.so.24.0
00:02:30.276  [250/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:02:30.535  [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:02:30.535  [252/265] Linking target lib/librte_compressdev.so.24.0
00:02:30.535  [253/265] Linking target lib/librte_net.so.24.0
00:02:30.535  [254/265] Linking target lib/librte_reorder.so.24.0
00:02:30.535  [255/265] Linking target lib/librte_cryptodev.so.24.0
00:02:30.794  [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:02:30.794  [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:02:30.794  [258/265] Linking target lib/librte_cmdline.so.24.0
00:02:30.794  [259/265] Linking target lib/librte_security.so.24.0
00:02:30.794  [260/265] Linking target lib/librte_ethdev.so.24.0
00:02:30.794  [261/265] Linking target lib/librte_hash.so.24.0
00:02:31.053  [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:31.053  [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:02:31.053  [264/265] Linking target lib/librte_power.so.24.0
00:02:31.312  [265/265] Linking target lib/librte_vhost.so.24.0
00:02:31.312  INFO: autodetecting backend as ninja
00:02:31.312  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp -j 72
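Note: the two INFO lines above are Meson re-invoking its backend for the DPDK build directory before the SPDK objects below are compiled; the equivalent manual step is exactly the printed command:

ninja -C /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/build-tmp -j 72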
00:02:32.691    CC lib/ut_mock/mock.o
00:02:32.691    CC lib/log/log.o
00:02:32.691    CC lib/log/log_flags.o
00:02:32.691    CC lib/log/log_deprecated.o
00:02:32.692  make[3]: '/var/jenkins/workspace/nvme-phy-autotest/spdk/build/lib/libspdk_ocfenv.a' is up to date.
00:02:32.692    CC lib/ut/ut.o
00:02:32.692    LIB libspdk_ut.a
00:02:32.692    SO libspdk_ut.so.1.0
00:02:32.692    SYMLINK libspdk_ut.so
00:02:32.692    LIB libspdk_ut_mock.a
00:02:32.692    SO libspdk_ut_mock.so.5.0
00:02:32.692    LIB libspdk_log.a
00:02:32.692    SO libspdk_log.so.6.1
00:02:32.951    SYMLINK libspdk_ut_mock.so
00:02:32.951    SYMLINK libspdk_log.so
00:02:33.210    CC lib/ioat/ioat.o
00:02:33.210    CXX lib/trace_parser/trace.o
00:02:33.210    CC lib/dma/dma.o
00:02:33.210    CC lib/util/base64.o
00:02:33.210    CC lib/util/bit_array.o
00:02:33.210    CC lib/util/cpuset.o
00:02:33.210    CC lib/util/crc32c.o
00:02:33.210    CC lib/util/crc16.o
00:02:33.210    CC lib/util/crc32.o
00:02:33.210    CC lib/util/crc32_ieee.o
00:02:33.210    CC lib/util/crc64.o
00:02:33.210    CC lib/util/dif.o
00:02:33.210    CC lib/util/fd.o
00:02:33.210    CC lib/util/file.o
00:02:33.210    CC lib/util/hexlify.o
00:02:33.210    CC lib/util/iov.o
00:02:33.210    CC lib/util/math.o
00:02:33.210    CC lib/util/pipe.o
00:02:33.210    CC lib/util/strerror_tls.o
00:02:33.210    CC lib/util/string.o
00:02:33.210    CC lib/util/uuid.o
00:02:33.210    CC lib/util/fd_group.o
00:02:33.210    CC lib/util/xor.o
00:02:33.210    CC lib/util/zipf.o
00:02:33.210    CC lib/vfio_user/host/vfio_user_pci.o
00:02:33.210    CC lib/vfio_user/host/vfio_user.o
00:02:33.469    LIB libspdk_dma.a
00:02:33.469    SO libspdk_dma.so.3.0
00:02:33.469    LIB libspdk_ioat.a
00:02:33.469    SYMLINK libspdk_dma.so
00:02:33.469    SO libspdk_ioat.so.6.0
00:02:33.469    SYMLINK libspdk_ioat.so
00:02:33.728    LIB libspdk_vfio_user.a
00:02:33.728    SO libspdk_vfio_user.so.4.0
00:02:33.728    LIB libspdk_util.a
00:02:33.728    SYMLINK libspdk_vfio_user.so
00:02:33.728    SO libspdk_util.so.8.0
00:02:33.988    SYMLINK libspdk_util.so
00:02:33.988    LIB libspdk_trace_parser.a
00:02:33.988    SO libspdk_trace_parser.so.4.0
00:02:34.247    CC lib/json/json_parse.o
00:02:34.247    CC lib/rdma/common.o
00:02:34.247    CC lib/vmd/vmd.o
00:02:34.247    CC lib/vmd/led.o
00:02:34.247    CC lib/json/json_util.o
00:02:34.247    CC lib/rdma/rdma_verbs.o
00:02:34.247    CC lib/json/json_write.o
00:02:34.247    CC lib/conf/conf.o
00:02:34.247    CC lib/env_dpdk/pci.o
00:02:34.247    CC lib/env_dpdk/memory.o
00:02:34.247    CC lib/env_dpdk/env.o
00:02:34.247    CC lib/env_dpdk/init.o
00:02:34.247    CC lib/env_dpdk/threads.o
00:02:34.247    CC lib/idxd/idxd.o
00:02:34.247    CC lib/env_dpdk/pci_ioat.o
00:02:34.247    CC lib/idxd/idxd_user.o
00:02:34.247    CC lib/env_dpdk/pci_virtio.o
00:02:34.247    CC lib/idxd/idxd_kernel.o
00:02:34.247    CC lib/env_dpdk/pci_vmd.o
00:02:34.247    CC lib/env_dpdk/pci_idxd.o
00:02:34.247    CC lib/env_dpdk/pci_event.o
00:02:34.247    CC lib/env_dpdk/sigbus_handler.o
00:02:34.247    CC lib/env_dpdk/pci_dpdk.o
00:02:34.247    CC lib/env_dpdk/pci_dpdk_2211.o
00:02:34.247    CC lib/env_dpdk/pci_dpdk_2207.o
00:02:34.247    SYMLINK libspdk_trace_parser.so
00:02:34.507    LIB libspdk_rdma.a
00:02:34.507    LIB libspdk_conf.a
00:02:34.507    SO libspdk_rdma.so.5.0
00:02:34.507    LIB libspdk_json.a
00:02:34.507    SO libspdk_conf.so.5.0
00:02:34.507    SO libspdk_json.so.5.1
00:02:34.766    SYMLINK libspdk_rdma.so
00:02:34.766    SYMLINK libspdk_conf.so
00:02:34.766    SYMLINK libspdk_json.so
00:02:34.766    LIB libspdk_idxd.a
00:02:34.766    SO libspdk_idxd.so.11.0
00:02:34.766    LIB libspdk_vmd.a
00:02:35.025    CC lib/jsonrpc/jsonrpc_server.o
00:02:35.025    CC lib/jsonrpc/jsonrpc_client.o
00:02:35.025    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:35.025    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:35.025    SYMLINK libspdk_idxd.so
00:02:35.025    SO libspdk_vmd.so.5.0
00:02:35.025    SYMLINK libspdk_vmd.so
00:02:35.594    LIB libspdk_jsonrpc.a
00:02:35.594    SO libspdk_jsonrpc.so.5.1
00:02:35.594    SYMLINK libspdk_jsonrpc.so
00:02:35.594    LIB libspdk_env_dpdk.a
00:02:35.853    SO libspdk_env_dpdk.so.13.0
00:02:35.853    CC lib/rpc/rpc.o
00:02:35.853    SYMLINK libspdk_env_dpdk.so
00:02:36.112    LIB libspdk_rpc.a
00:02:36.112    SO libspdk_rpc.so.5.0
00:02:36.112    SYMLINK libspdk_rpc.so
00:02:36.371    CC lib/trace/trace.o
00:02:36.371    CC lib/trace/trace_flags.o
00:02:36.371    CC lib/sock/sock.o
00:02:36.371    CC lib/trace/trace_rpc.o
00:02:36.371    CC lib/sock/sock_rpc.o
00:02:36.371    CC lib/notify/notify.o
00:02:36.371    CC lib/notify/notify_rpc.o
00:02:36.631    LIB libspdk_notify.a
00:02:36.631    SO libspdk_notify.so.5.0
00:02:36.631    LIB libspdk_trace.a
00:02:36.631    SYMLINK libspdk_notify.so
00:02:36.631    SO libspdk_trace.so.9.0
00:02:36.890    SYMLINK libspdk_trace.so
00:02:36.890    LIB libspdk_sock.a
00:02:36.890    SO libspdk_sock.so.8.0
00:02:36.890    SYMLINK libspdk_sock.so
00:02:37.149    CC lib/thread/thread.o
00:02:37.149    CC lib/thread/iobuf.o
00:02:37.149    CC lib/nvme/nvme_ctrlr_cmd.o
00:02:37.149    CC lib/nvme/nvme_ctrlr.o
00:02:37.149    CC lib/nvme/nvme_fabric.o
00:02:37.149    CC lib/nvme/nvme_ns_cmd.o
00:02:37.149    CC lib/nvme/nvme_ns.o
00:02:37.149    CC lib/nvme/nvme_pcie_common.o
00:02:37.149    CC lib/nvme/nvme.o
00:02:37.149    CC lib/nvme/nvme_pcie.o
00:02:37.149    CC lib/nvme/nvme_qpair.o
00:02:37.149    CC lib/nvme/nvme_quirks.o
00:02:37.149    CC lib/nvme/nvme_transport.o
00:02:37.149    CC lib/nvme/nvme_discovery.o
00:02:37.149    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:37.149    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:37.149    CC lib/nvme/nvme_tcp.o
00:02:37.149    CC lib/nvme/nvme_opal.o
00:02:37.149    CC lib/nvme/nvme_io_msg.o
00:02:37.149    CC lib/nvme/nvme_poll_group.o
00:02:37.149    CC lib/nvme/nvme_zns.o
00:02:37.149    CC lib/nvme/nvme_cuse.o
00:02:37.149    CC lib/nvme/nvme_vfio_user.o
00:02:37.149    CC lib/nvme/nvme_rdma.o
00:02:38.524    LIB libspdk_thread.a
00:02:38.783    SO libspdk_thread.so.9.0
00:02:38.783    SYMLINK libspdk_thread.so
00:02:39.041    CC lib/init/json_config.o
00:02:39.041    CC lib/blob/blobstore.o
00:02:39.041    CC lib/init/subsystem.o
00:02:39.041    CC lib/blob/request.o
00:02:39.041    CC lib/init/subsystem_rpc.o
00:02:39.041    CC lib/blob/zeroes.o
00:02:39.041    CC lib/init/rpc.o
00:02:39.041    CC lib/blob/blob_bs_dev.o
00:02:39.041    CC lib/virtio/virtio.o
00:02:39.041    CC lib/accel/accel.o
00:02:39.041    CC lib/virtio/virtio_vhost_user.o
00:02:39.041    CC lib/virtio/virtio_vfio_user.o
00:02:39.041    CC lib/accel/accel_rpc.o
00:02:39.041    CC lib/accel/accel_sw.o
00:02:39.041    CC lib/virtio/virtio_pci.o
00:02:39.300    LIB libspdk_init.a
00:02:39.300    SO libspdk_init.so.4.0
00:02:39.300    LIB libspdk_nvme.a
00:02:39.300    SYMLINK libspdk_init.so
00:02:39.559    SO libspdk_nvme.so.12.0
00:02:39.559    CC lib/event/app.o
00:02:39.559    CC lib/event/reactor.o
00:02:39.559    CC lib/event/log_rpc.o
00:02:39.559    CC lib/event/app_rpc.o
00:02:39.559    CC lib/event/scheduler_static.o
00:02:39.818    LIB libspdk_virtio.a
00:02:39.818    SO libspdk_virtio.so.6.0
00:02:39.818    SYMLINK libspdk_nvme.so
00:02:39.818    SYMLINK libspdk_virtio.so
00:02:40.077    LIB libspdk_accel.a
00:02:40.077    SO libspdk_accel.so.14.0
00:02:40.077    LIB libspdk_event.a
00:02:40.077    SYMLINK libspdk_accel.so
00:02:40.077    SO libspdk_event.so.12.0
00:02:40.334    SYMLINK libspdk_event.so
00:02:40.334    CC lib/bdev/bdev.o
00:02:40.334    CC lib/bdev/bdev_rpc.o
00:02:40.334    CC lib/bdev/bdev_zone.o
00:02:40.334    CC lib/bdev/part.o
00:02:40.334    CC lib/bdev/scsi_nvme.o
00:02:42.239    LIB libspdk_blob.a
00:02:42.239    SO libspdk_blob.so.10.1
00:02:42.239    SYMLINK libspdk_blob.so
00:02:42.239    CC lib/blobfs/blobfs.o
00:02:42.239    CC lib/blobfs/tree.o
00:02:42.239    CC lib/lvol/lvol.o
00:02:42.808    LIB libspdk_blobfs.a
00:02:43.067    SO libspdk_blobfs.so.9.0
00:02:43.067    LIB libspdk_lvol.a
00:02:43.067    SO libspdk_lvol.so.9.1
00:02:43.067    SYMLINK libspdk_blobfs.so
00:02:43.067    LIB libspdk_bdev.a
00:02:43.067    SYMLINK libspdk_lvol.so
00:02:43.067    SO libspdk_bdev.so.14.0
00:02:43.326    SYMLINK libspdk_bdev.so
00:02:43.326    CC lib/nvmf/ctrlr_discovery.o
00:02:43.326    CC lib/nvmf/ctrlr.o
00:02:43.326    CC lib/nvmf/ctrlr_bdev.o
00:02:43.326    CC lib/nbd/nbd.o
00:02:43.326    CC lib/nvmf/subsystem.o
00:02:43.326    CC lib/nbd/nbd_rpc.o
00:02:43.326    CC lib/nvmf/nvmf.o
00:02:43.326    CC lib/nvmf/nvmf_rpc.o
00:02:43.326    CC lib/nvmf/tcp.o
00:02:43.326    CC lib/nvmf/transport.o
00:02:43.590    CC lib/nvmf/rdma.o
00:02:43.590    CC lib/ftl/ftl_core.o
00:02:43.590    CC lib/scsi/lun.o
00:02:43.590    CC lib/ftl/ftl_init.o
00:02:43.590    CC lib/ftl/ftl_layout.o
00:02:43.590    CC lib/scsi/dev.o
00:02:43.590    CC lib/ftl/ftl_debug.o
00:02:43.590    CC lib/scsi/scsi.o
00:02:43.590    CC lib/ftl/ftl_io.o
00:02:43.590    CC lib/scsi/port.o
00:02:43.590    CC lib/ftl/ftl_sb.o
00:02:43.590    CC lib/ublk/ublk.o
00:02:43.590    CC lib/ublk/ublk_rpc.o
00:02:43.590    CC lib/ftl/ftl_l2p.o
00:02:43.590    CC lib/scsi/scsi_bdev.o
00:02:43.590    CC lib/ftl/ftl_l2p_flat.o
00:02:43.590    CC lib/ftl/ftl_nv_cache.o
00:02:43.590    CC lib/scsi/scsi_pr.o
00:02:43.590    CC lib/ftl/ftl_band.o
00:02:43.590    CC lib/scsi/scsi_rpc.o
00:02:43.590    CC lib/scsi/task.o
00:02:43.590    CC lib/ftl/ftl_band_ops.o
00:02:43.590    CC lib/ftl/ftl_writer.o
00:02:43.590    CC lib/ftl/ftl_rq.o
00:02:43.590    CC lib/ftl/ftl_l2p_cache.o
00:02:43.590    CC lib/ftl/ftl_reloc.o
00:02:43.590    CC lib/ftl/ftl_p2l.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_md.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_band.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:43.590    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:43.590    CC lib/ftl/utils/ftl_conf.o
00:02:43.590    CC lib/ftl/utils/ftl_bitmap.o
00:02:43.590    CC lib/ftl/utils/ftl_md.o
00:02:43.590    CC lib/ftl/utils/ftl_mempool.o
00:02:43.590    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:43.590    CC lib/ftl/utils/ftl_property.o
00:02:43.590    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:43.590    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:43.590    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:43.590    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:43.590    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:43.590    CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:43.590    CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:43.590    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:43.590    CC lib/ftl/base/ftl_base_bdev.o
00:02:43.590    CC lib/ftl/base/ftl_base_dev.o
00:02:43.590    CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:43.590    CC lib/ftl/ftl_trace.o
00:02:44.158    LIB libspdk_nbd.a
00:02:44.158    SO libspdk_nbd.so.6.0
00:02:44.158    SYMLINK libspdk_nbd.so
00:02:44.158    LIB libspdk_scsi.a
00:02:44.416    SO libspdk_scsi.so.8.0
00:02:44.416    LIB libspdk_ublk.a
00:02:44.416    SO libspdk_ublk.so.2.0
00:02:44.416    SYMLINK libspdk_ublk.so
00:02:44.416    SYMLINK libspdk_scsi.so
00:02:44.676    CC lib/vhost/vhost.o
00:02:44.676    CC lib/vhost/vhost_rpc.o
00:02:44.676    CC lib/vhost/rte_vhost_user.o
00:02:44.676    CC lib/vhost/vhost_scsi.o
00:02:44.676    CC lib/vhost/vhost_blk.o
00:02:44.676    CC lib/iscsi/conn.o
00:02:44.676    CC lib/iscsi/init_grp.o
00:02:44.676    CC lib/iscsi/iscsi.o
00:02:44.676    CC lib/iscsi/md5.o
00:02:44.676    CC lib/iscsi/param.o
00:02:44.676    CC lib/iscsi/portal_grp.o
00:02:44.676    CC lib/iscsi/tgt_node.o
00:02:44.676    CC lib/iscsi/iscsi_subsystem.o
00:02:44.676    CC lib/iscsi/iscsi_rpc.o
00:02:44.676    CC lib/iscsi/task.o
00:02:44.935    LIB libspdk_ftl.a
00:02:45.194    SO libspdk_ftl.so.8.0
00:02:45.194    LIB libspdk_nvmf.a
00:02:45.453    SO libspdk_nvmf.so.17.0
00:02:45.453    SYMLINK libspdk_ftl.so
00:02:45.712    SYMLINK libspdk_nvmf.so
00:02:45.971    LIB libspdk_vhost.a
00:02:45.971    SO libspdk_vhost.so.7.1
00:02:45.971    SYMLINK libspdk_vhost.so
00:02:46.230    LIB libspdk_iscsi.a
00:02:46.230    SO libspdk_iscsi.so.7.0
00:02:46.489    SYMLINK libspdk_iscsi.so
00:02:46.747    CC module/env_dpdk/env_dpdk_rpc.o
00:02:47.005    CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:47.005    CC module/accel/ioat/accel_ioat.o
00:02:47.005    CC module/accel/ioat/accel_ioat_rpc.o
00:02:47.005    CC module/sock/posix/posix.o
00:02:47.005    CC module/blob/bdev/blob_bdev.o
00:02:47.005    CC module/accel/iaa/accel_iaa.o
00:02:47.005    CC module/accel/iaa/accel_iaa_rpc.o
00:02:47.005    CC module/accel/dsa/accel_dsa.o
00:02:47.005    CC module/accel/error/accel_error.o
00:02:47.005    CC module/accel/dsa/accel_dsa_rpc.o
00:02:47.005    CC module/scheduler/gscheduler/gscheduler.o
00:02:47.005    CC module/accel/error/accel_error_rpc.o
00:02:47.005    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:47.005    LIB libspdk_env_dpdk_rpc.a
00:02:47.005    SO libspdk_env_dpdk_rpc.so.5.0
00:02:47.005    LIB libspdk_scheduler_dpdk_governor.a
00:02:47.005    SYMLINK libspdk_env_dpdk_rpc.so
00:02:47.005    LIB libspdk_scheduler_dynamic.a
00:02:47.005    SO libspdk_scheduler_dpdk_governor.so.3.0
00:02:47.005    SO libspdk_scheduler_dynamic.so.3.0
00:02:47.263    LIB libspdk_accel_error.a
00:02:47.263    LIB libspdk_accel_ioat.a
00:02:47.263    LIB libspdk_scheduler_gscheduler.a
00:02:47.263    SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:47.263    LIB libspdk_accel_iaa.a
00:02:47.263    SO libspdk_accel_error.so.1.0
00:02:47.263    SO libspdk_accel_ioat.so.5.0
00:02:47.263    LIB libspdk_blob_bdev.a
00:02:47.263    SO libspdk_scheduler_gscheduler.so.3.0
00:02:47.263    SYMLINK libspdk_scheduler_dynamic.so
00:02:47.263    SO libspdk_accel_iaa.so.2.0
00:02:47.263    LIB libspdk_accel_dsa.a
00:02:47.263    SO libspdk_blob_bdev.so.10.1
00:02:47.263    SYMLINK libspdk_accel_error.so
00:02:47.263    SYMLINK libspdk_accel_ioat.so
00:02:47.263    SYMLINK libspdk_scheduler_gscheduler.so
00:02:47.263    SO libspdk_accel_dsa.so.4.0
00:02:47.263    SYMLINK libspdk_accel_iaa.so
00:02:47.263    SYMLINK libspdk_blob_bdev.so
00:02:47.263    SYMLINK libspdk_accel_dsa.so
00:02:47.523    CC module/bdev/malloc/bdev_malloc.o
00:02:47.523    CC module/bdev/raid/bdev_raid.o
00:02:47.523    CC module/bdev/raid/bdev_raid_rpc.o
00:02:47.523    CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:47.523    CC module/bdev/raid/bdev_raid_sb.o
00:02:47.523    CC module/bdev/raid/raid1.o
00:02:47.523    CC module/bdev/raid/raid0.o
00:02:47.523    CC module/bdev/raid/concat.o
00:02:47.523    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:47.523    CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:47.523    CC module/bdev/nvme/bdev_nvme.o
00:02:47.523    CC module/bdev/nvme/nvme_rpc.o
00:02:47.523    CC module/bdev/lvol/vbdev_lvol.o
00:02:47.523    CC module/bdev/nvme/bdev_mdns_client.o
00:02:47.523    CC module/bdev/nvme/vbdev_opal.o
00:02:47.523    CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:47.523    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:47.523    CC module/bdev/gpt/gpt.o
00:02:47.523    CC module/bdev/gpt/vbdev_gpt.o
00:02:47.523    CC module/bdev/error/vbdev_error.o
00:02:47.523    CC module/blobfs/bdev/blobfs_bdev.o
00:02:47.523    CC module/bdev/zone_block/vbdev_zone_block.o
00:02:47.523    CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:47.523    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:47.523    CC module/bdev/delay/vbdev_delay_rpc.o
00:02:47.523    CC module/bdev/error/vbdev_error_rpc.o
00:02:47.523    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:47.523    CC module/bdev/delay/vbdev_delay.o
00:02:47.523    CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:47.523    CC module/bdev/virtio/bdev_virtio_blk.o
00:02:47.523    CC module/bdev/aio/bdev_aio.o
00:02:47.523    CC module/bdev/null/bdev_null.o
00:02:47.523    CC module/bdev/null/bdev_null_rpc.o
00:02:47.523    CC module/bdev/aio/bdev_aio_rpc.o
00:02:47.523    CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:47.523    CC module/bdev/ftl/bdev_ftl.o
00:02:47.782    CC module/bdev/iscsi/bdev_iscsi.o
00:02:47.783    CC module/bdev/split/vbdev_split.o
00:02:47.783    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:47.783    CC module/bdev/passthru/vbdev_passthru.o
00:02:47.783    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:47.783    CC module/bdev/split/vbdev_split_rpc.o
00:02:47.783    CC module/bdev/ocf/ctx.o
00:02:47.783    CC module/bdev/ocf/data.o
00:02:47.783    LIB libspdk_sock_posix.a
00:02:47.783    CC module/bdev/ocf/stats.o
00:02:47.783    CC module/bdev/ocf/utils.o
00:02:47.783    CC module/bdev/ocf/vbdev_ocf.o
00:02:47.783    CC module/bdev/ocf/vbdev_ocf_rpc.o
00:02:47.783    CC module/bdev/ocf/volume.o
00:02:47.783    SO libspdk_sock_posix.so.5.0
00:02:47.783    SYMLINK libspdk_sock_posix.so
00:02:47.783    LIB libspdk_blobfs_bdev.a
00:02:48.041    SO libspdk_blobfs_bdev.so.5.0
00:02:48.041    LIB libspdk_bdev_split.a
00:02:48.041    LIB libspdk_bdev_error.a
00:02:48.041    LIB libspdk_bdev_gpt.a
00:02:48.041    SO libspdk_bdev_split.so.5.0
00:02:48.041    SYMLINK libspdk_blobfs_bdev.so
00:02:48.041    SO libspdk_bdev_error.so.5.0
00:02:48.041    SO libspdk_bdev_gpt.so.5.0
00:02:48.041    LIB libspdk_bdev_aio.a
00:02:48.041    SYMLINK libspdk_bdev_split.so
00:02:48.041    SYMLINK libspdk_bdev_error.so
00:02:48.041    LIB libspdk_bdev_null.a
00:02:48.041    LIB libspdk_bdev_delay.a
00:02:48.041    SYMLINK libspdk_bdev_gpt.so
00:02:48.041    SO libspdk_bdev_aio.so.5.0
00:02:48.041    SO libspdk_bdev_null.so.5.0
00:02:48.041    SO libspdk_bdev_delay.so.5.0
00:02:48.041    LIB libspdk_bdev_ftl.a
00:02:48.300    SYMLINK libspdk_bdev_aio.so
00:02:48.300    SO libspdk_bdev_ftl.so.5.0
00:02:48.300    LIB libspdk_bdev_passthru.a
00:02:48.300    LIB libspdk_bdev_iscsi.a
00:02:48.300    LIB libspdk_bdev_zone_block.a
00:02:48.300    SYMLINK libspdk_bdev_null.so
00:02:48.300    SYMLINK libspdk_bdev_delay.so
00:02:48.300    SO libspdk_bdev_passthru.so.5.0
00:02:48.300    LIB libspdk_bdev_malloc.a
00:02:48.300    SO libspdk_bdev_iscsi.so.5.0
00:02:48.300    SO libspdk_bdev_zone_block.so.5.0
00:02:48.300    SYMLINK libspdk_bdev_ftl.so
00:02:48.300    SO libspdk_bdev_malloc.so.5.0
00:02:48.300    SYMLINK libspdk_bdev_passthru.so
00:02:48.300    LIB libspdk_bdev_virtio.a
00:02:48.300    LIB libspdk_bdev_lvol.a
00:02:48.300    SYMLINK libspdk_bdev_zone_block.so
00:02:48.300    SYMLINK libspdk_bdev_malloc.so
00:02:48.300    LIB libspdk_bdev_ocf.a
00:02:48.300    SO libspdk_bdev_lvol.so.5.0
00:02:48.300    SO libspdk_bdev_virtio.so.5.0
00:02:48.300    SYMLINK libspdk_bdev_iscsi.so
00:02:48.300    SO libspdk_bdev_ocf.so.5.0
00:02:48.559    SYMLINK libspdk_bdev_virtio.so
00:02:48.559    SYMLINK libspdk_bdev_lvol.so
00:02:48.559    SYMLINK libspdk_bdev_ocf.so
00:02:48.559    LIB libspdk_bdev_raid.a
00:02:48.559    SO libspdk_bdev_raid.so.5.0
00:02:48.818    SYMLINK libspdk_bdev_raid.so
00:02:50.197    LIB libspdk_bdev_nvme.a
00:02:50.197    SO libspdk_bdev_nvme.so.6.0
00:02:50.197    SYMLINK libspdk_bdev_nvme.so
00:02:50.765    CC module/event/subsystems/vmd/vmd.o
00:02:50.765    CC module/event/subsystems/vmd/vmd_rpc.o
00:02:50.765    CC module/event/subsystems/iobuf/iobuf.o
00:02:50.765    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:50.765    CC module/event/subsystems/sock/sock.o
00:02:50.765    CC module/event/subsystems/scheduler/scheduler.o
00:02:50.765    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:50.765    LIB libspdk_event_sock.a
00:02:50.765    LIB libspdk_event_scheduler.a
00:02:50.765    LIB libspdk_event_vhost_blk.a
00:02:50.765    LIB libspdk_event_iobuf.a
00:02:50.765    SO libspdk_event_sock.so.4.0
00:02:50.765    SO libspdk_event_scheduler.so.3.0
00:02:50.765    SO libspdk_event_vhost_blk.so.2.0
00:02:51.025    SO libspdk_event_iobuf.so.2.0
00:02:51.025    SYMLINK libspdk_event_sock.so
00:02:51.025    SYMLINK libspdk_event_scheduler.so
00:02:51.025    SYMLINK libspdk_event_vhost_blk.so
00:02:51.025    SYMLINK libspdk_event_iobuf.so
00:02:51.025    LIB libspdk_event_vmd.a
00:02:51.025    SO libspdk_event_vmd.so.5.0
00:02:51.025    SYMLINK libspdk_event_vmd.so
00:02:51.284    CC module/event/subsystems/accel/accel.o
00:02:51.284    LIB libspdk_event_accel.a
00:02:51.544    SO libspdk_event_accel.so.5.0
00:02:51.544    SYMLINK libspdk_event_accel.so
00:02:51.803    CC module/event/subsystems/bdev/bdev.o
00:02:51.803    LIB libspdk_event_bdev.a
00:02:52.062    SO libspdk_event_bdev.so.5.0
00:02:52.062    SYMLINK libspdk_event_bdev.so
00:02:52.322    CC module/event/subsystems/scsi/scsi.o
00:02:52.322    CC module/event/subsystems/nbd/nbd.o
00:02:52.322    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:52.322    CC module/event/subsystems/ublk/ublk.o
00:02:52.322    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:52.582    LIB libspdk_event_nbd.a
00:02:52.582    LIB libspdk_event_ublk.a
00:02:52.582    LIB libspdk_event_scsi.a
00:02:52.582    SO libspdk_event_ublk.so.2.0
00:02:52.582    SO libspdk_event_nbd.so.5.0
00:02:52.582    SO libspdk_event_scsi.so.5.0
00:02:52.582    SYMLINK libspdk_event_ublk.so
00:02:52.582    SYMLINK libspdk_event_scsi.so
00:02:52.582    SYMLINK libspdk_event_nbd.so
00:02:52.842    LIB libspdk_event_nvmf.a
00:02:52.842    CC module/event/subsystems/iscsi/iscsi.o
00:02:52.842    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:52.842    SO libspdk_event_nvmf.so.5.0
00:02:52.842    SYMLINK libspdk_event_nvmf.so
00:02:53.102    LIB libspdk_event_vhost_scsi.a
00:02:53.102    SO libspdk_event_vhost_scsi.so.2.0
00:02:53.102    SYMLINK libspdk_event_vhost_scsi.so
00:02:53.102    LIB libspdk_event_iscsi.a
00:02:53.362    SO libspdk_event_iscsi.so.5.0
00:02:53.362    SYMLINK libspdk_event_iscsi.so
00:02:53.362    SO libspdk.so.5.0
00:02:53.362    SYMLINK libspdk.so
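The TEST_HEADER lines below register SPDK's public headers, and each later "CXX test/cpp_headers/<name>.o" compiles a tiny translation unit that includes exactly one of them, proving every installed header is self-contained under a C++ compiler. An assumed shape of one such check (file name and include path are illustrative, not SPDK's actual generator):

    cat > accel_check.cpp <<'EOF'
    #include "spdk/accel.h"
    int main() { return 0; }
    EOF
    c++ -I include -c accel_check.cpp -o accel.o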
00:02:53.625    CC app/trace_record/trace_record.o
00:02:53.625    CXX app/trace/trace.o
00:02:53.625    CC app/spdk_nvme_identify/identify.o
00:02:53.625    CC app/spdk_top/spdk_top.o
00:02:53.625    CC app/spdk_nvme_discover/discovery_aer.o
00:02:53.625    CC app/spdk_nvme_perf/perf.o
00:02:53.625    CC app/spdk_lspci/spdk_lspci.o
00:02:53.625    CC test/rpc_client/rpc_client_test.o
00:02:53.625    CC examples/interrupt_tgt/interrupt_tgt.o
00:02:53.625    TEST_HEADER include/spdk/accel.h
00:02:53.625    TEST_HEADER include/spdk/accel_module.h
00:02:53.886    CC app/spdk_dd/spdk_dd.o
00:02:53.886    TEST_HEADER include/spdk/assert.h
00:02:53.886    TEST_HEADER include/spdk/barrier.h
00:02:53.886    TEST_HEADER include/spdk/base64.h
00:02:53.886    TEST_HEADER include/spdk/bdev.h
00:02:53.886    TEST_HEADER include/spdk/bdev_module.h
00:02:53.886    TEST_HEADER include/spdk/bdev_zone.h
00:02:53.886    TEST_HEADER include/spdk/bit_array.h
00:02:53.886    TEST_HEADER include/spdk/bit_pool.h
00:02:53.886    CC app/nvmf_tgt/nvmf_main.o
00:02:53.886    TEST_HEADER include/spdk/blob_bdev.h
00:02:53.886    CC app/vhost/vhost.o
00:02:53.886    TEST_HEADER include/spdk/blobfs_bdev.h
00:02:53.886    TEST_HEADER include/spdk/blobfs.h
00:02:53.886    CC app/iscsi_tgt/iscsi_tgt.o
00:02:53.886    TEST_HEADER include/spdk/blob.h
00:02:53.886    CC examples/nvme/cmb_copy/cmb_copy.o
00:02:53.886    CC examples/nvme/abort/abort.o
00:02:53.886    CC examples/ioat/verify/verify.o
00:02:53.886    CC examples/ioat/perf/perf.o
00:02:53.886    TEST_HEADER include/spdk/conf.h
00:02:53.886    CC examples/vmd/led/led.o
00:02:53.887    CC examples/nvme/nvme_manage/nvme_manage.o
00:02:53.887    TEST_HEADER include/spdk/config.h
00:02:53.887    TEST_HEADER include/spdk/cpuset.h
00:02:53.887    TEST_HEADER include/spdk/crc16.h
00:02:53.887    TEST_HEADER include/spdk/crc32.h
00:02:53.887    CC examples/vmd/lsvmd/lsvmd.o
00:02:53.887    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:53.887    CC examples/nvme/arbitration/arbitration.o
00:02:53.887    CC examples/nvme/hello_world/hello_world.o
00:02:53.887    CC test/env/memory/memory_ut.o
00:02:53.887    CC examples/accel/perf/accel_perf.o
00:02:53.887    TEST_HEADER include/spdk/crc64.h
00:02:53.887    CC test/app/jsoncat/jsoncat.o
00:02:53.887    CC examples/util/zipf/zipf.o
00:02:53.887    TEST_HEADER include/spdk/dif.h
00:02:53.887    CC test/event/reactor_perf/reactor_perf.o
00:02:53.887    CC examples/sock/hello_world/hello_sock.o
00:02:53.887    CC app/spdk_tgt/spdk_tgt.o
00:02:53.887    CC test/app/stub/stub.o
00:02:53.887    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:53.887    CC app/fio/nvme/fio_plugin.o
00:02:53.887    CC examples/nvme/hotplug/hotplug.o
00:02:53.887    TEST_HEADER include/spdk/dma.h
00:02:53.887    CC test/event/event_perf/event_perf.o
00:02:53.887    CC test/env/pci/pci_ut.o
00:02:53.887    TEST_HEADER include/spdk/endian.h
00:02:53.887    CC test/app/histogram_perf/histogram_perf.o
00:02:53.887    CC test/nvme/err_injection/err_injection.o
00:02:53.887    TEST_HEADER include/spdk/env_dpdk.h
00:02:53.887    TEST_HEADER include/spdk/env.h
00:02:53.887    CC test/event/reactor/reactor.o
00:02:53.887    CC examples/idxd/perf/perf.o
00:02:53.887    CC test/nvme/aer/aer.o
00:02:53.887    TEST_HEADER include/spdk/event.h
00:02:53.887    CC test/nvme/e2edp/nvme_dp.o
00:02:53.887    CC test/nvme/sgl/sgl.o
00:02:53.887    TEST_HEADER include/spdk/fd_group.h
00:02:53.887    CC examples/nvme/reconnect/reconnect.o
00:02:53.887    CC test/env/vtophys/vtophys.o
00:02:53.887    TEST_HEADER include/spdk/fd.h
00:02:53.887    CC test/nvme/reserve/reserve.o
00:02:53.887    TEST_HEADER include/spdk/file.h
00:02:53.887    CC test/nvme/connect_stress/connect_stress.o
00:02:53.887    CC test/nvme/compliance/nvme_compliance.o
00:02:53.887    CC test/nvme/reset/reset.o
00:02:53.887    TEST_HEADER include/spdk/ftl.h
00:02:53.887    CC test/thread/poller_perf/poller_perf.o
00:02:53.887    CC test/nvme/boot_partition/boot_partition.o
00:02:53.887    TEST_HEADER include/spdk/gpt_spec.h
00:02:53.887    CC test/event/app_repeat/app_repeat.o
00:02:53.887    CC test/nvme/startup/startup.o
00:02:53.887    CC examples/blob/hello_world/hello_blob.o
00:02:53.887    CC examples/blob/cli/blobcli.o
00:02:53.887    CC test/nvme/simple_copy/simple_copy.o
00:02:53.887    TEST_HEADER include/spdk/hexlify.h
00:02:53.887    CC test/bdev/bdevio/bdevio.o
00:02:53.887    CC examples/nvmf/nvmf/nvmf.o
00:02:53.887    TEST_HEADER include/spdk/histogram_data.h
00:02:53.887    CC examples/bdev/hello_world/hello_bdev.o
00:02:53.887    TEST_HEADER include/spdk/idxd.h
00:02:53.887    CC test/accel/dif/dif.o
00:02:53.887    CC app/fio/bdev/fio_plugin.o
00:02:53.887    CC test/nvme/overhead/overhead.o
00:02:53.887    CC examples/thread/thread/thread_ex.o
00:02:53.887    TEST_HEADER include/spdk/idxd_spec.h
00:02:53.887    TEST_HEADER include/spdk/init.h
00:02:53.887    CC test/app/bdev_svc/bdev_svc.o
00:02:53.887    TEST_HEADER include/spdk/ioat.h
00:02:53.887    CC test/blobfs/mkfs/mkfs.o
00:02:53.887    TEST_HEADER include/spdk/ioat_spec.h
00:02:53.887    CC test/event/scheduler/scheduler.o
00:02:53.887    TEST_HEADER include/spdk/iscsi_spec.h
00:02:53.887    TEST_HEADER include/spdk/json.h
00:02:53.887    CC test/dma/test_dma/test_dma.o
00:02:53.887    TEST_HEADER include/spdk/jsonrpc.h
00:02:53.887    CC examples/bdev/bdevperf/bdevperf.o
00:02:53.887    TEST_HEADER include/spdk/likely.h
00:02:53.887    TEST_HEADER include/spdk/log.h
00:02:53.887    TEST_HEADER include/spdk/lvol.h
00:02:53.887    TEST_HEADER include/spdk/memory.h
00:02:53.887    TEST_HEADER include/spdk/mmio.h
00:02:54.154    TEST_HEADER include/spdk/nbd.h
00:02:54.154    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:54.154    TEST_HEADER include/spdk/notify.h
00:02:54.154    LINK spdk_lspci
00:02:54.154    TEST_HEADER include/spdk/nvme.h
00:02:54.154    TEST_HEADER include/spdk/nvme_intel.h
00:02:54.154    TEST_HEADER include/spdk/nvme_ocssd.h
00:02:54.154    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:54.154    TEST_HEADER include/spdk/nvme_spec.h
00:02:54.154    TEST_HEADER include/spdk/nvme_zns.h
00:02:54.154    TEST_HEADER include/spdk/nvmf_cmd.h
00:02:54.154    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:54.154    CC test/env/mem_callbacks/mem_callbacks.o
00:02:54.154    CC test/lvol/esnap/esnap.o
00:02:54.154    TEST_HEADER include/spdk/nvmf.h
00:02:54.154    TEST_HEADER include/spdk/nvmf_spec.h
00:02:54.154    TEST_HEADER include/spdk/nvmf_transport.h
00:02:54.154    TEST_HEADER include/spdk/opal.h
00:02:54.154    LINK spdk_nvme_discover
00:02:54.154    TEST_HEADER include/spdk/opal_spec.h
00:02:54.154    LINK interrupt_tgt
00:02:54.154    TEST_HEADER include/spdk/pci_ids.h
00:02:54.154    TEST_HEADER include/spdk/pipe.h
00:02:54.154    LINK reactor
00:02:54.154    TEST_HEADER include/spdk/queue.h
00:02:54.154    TEST_HEADER include/spdk/reduce.h
00:02:54.154    TEST_HEADER include/spdk/rpc.h
00:02:54.154    TEST_HEADER include/spdk/scheduler.h
00:02:54.154    TEST_HEADER include/spdk/scsi.h
00:02:54.154    TEST_HEADER include/spdk/scsi_spec.h
00:02:54.154    LINK reactor_perf
00:02:54.154    TEST_HEADER include/spdk/sock.h
00:02:54.154    LINK rpc_client_test
00:02:54.154    TEST_HEADER include/spdk/stdinc.h
00:02:54.154    TEST_HEADER include/spdk/string.h
00:02:54.154    LINK nvmf_tgt
00:02:54.154    TEST_HEADER include/spdk/thread.h
00:02:54.154    TEST_HEADER include/spdk/trace.h
00:02:54.154    LINK env_dpdk_post_init
00:02:54.154    TEST_HEADER include/spdk/trace_parser.h
00:02:54.154    LINK cmb_copy
00:02:54.154    TEST_HEADER include/spdk/tree.h
00:02:54.154    TEST_HEADER include/spdk/ublk.h
00:02:54.154    TEST_HEADER include/spdk/util.h
00:02:54.154    TEST_HEADER include/spdk/uuid.h
00:02:54.154    LINK lsvmd
00:02:54.154    TEST_HEADER include/spdk/version.h
00:02:54.154    TEST_HEADER include/spdk/vfio_user_pci.h
00:02:54.154    LINK led
00:02:54.154    TEST_HEADER include/spdk/vfio_user_spec.h
00:02:54.154    TEST_HEADER include/spdk/vhost.h
00:02:54.154    LINK jsoncat
00:02:54.154    TEST_HEADER include/spdk/vmd.h
00:02:54.154    LINK stub
00:02:54.154    TEST_HEADER include/spdk/xor.h
00:02:54.154    LINK app_repeat
00:02:54.154    LINK zipf
00:02:54.154    TEST_HEADER include/spdk/zipf.h
00:02:54.154    LINK boot_partition
00:02:54.420    CXX test/cpp_headers/accel.o
00:02:54.420    LINK event_perf
00:02:54.420    LINK vtophys
00:02:54.420    LINK err_injection
00:02:54.420    LINK spdk_trace_record
00:02:54.420    LINK iscsi_tgt
00:02:54.420    LINK pmr_persistence
00:02:54.420    LINK startup
00:02:54.420    LINK bdev_svc
00:02:54.420    LINK spdk_tgt
00:02:54.420    LINK reserve
00:02:54.420    LINK vhost
00:02:54.421    LINK connect_stress
00:02:54.421    LINK hello_sock
00:02:54.421    LINK ioat_perf
00:02:54.421    LINK hello_world
00:02:54.421    LINK verify
00:02:54.421    LINK reset
00:02:54.421    LINK hotplug
00:02:54.421    LINK hello_blob
00:02:54.421    LINK simple_copy
00:02:54.421    LINK sgl
00:02:54.421    LINK hello_bdev
00:02:54.421    LINK mkfs
00:02:54.421    LINK arbitration
00:02:54.421    LINK scheduler
00:02:54.421    LINK histogram_perf
00:02:54.421    LINK nvme_compliance
00:02:54.421    LINK poller_perf
00:02:54.421    LINK reconnect
00:02:54.421    LINK overhead
00:02:54.421    CXX test/cpp_headers/accel_module.o
00:02:54.421    LINK aer
00:02:54.688    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:54.688    LINK pci_ut
00:02:54.688    LINK thread
00:02:54.688    LINK idxd_perf
00:02:54.688    LINK spdk_dd
00:02:54.688    CXX test/cpp_headers/assert.o
00:02:54.688    LINK abort
00:02:54.688    CXX test/cpp_headers/barrier.o
00:02:54.688    CXX test/cpp_headers/base64.o
00:02:54.688    CXX test/cpp_headers/bdev.o
00:02:54.688    CXX test/cpp_headers/bdev_module.o
00:02:54.688    CC test/nvme/fused_ordering/fused_ordering.o
00:02:54.688    CXX test/cpp_headers/bdev_zone.o
00:02:54.688    LINK nvme_dp
00:02:54.688    CXX test/cpp_headers/bit_array.o
00:02:54.688    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:54.688    CXX test/cpp_headers/bit_pool.o
00:02:54.688    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:54.688    CXX test/cpp_headers/blob_bdev.o
00:02:54.688    CXX test/cpp_headers/blobfs_bdev.o
00:02:54.688    CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:54.688    CXX test/cpp_headers/blobfs.o
00:02:54.688    CXX test/cpp_headers/blob.o
00:02:54.688    CC test/nvme/fdp/fdp.o
00:02:54.688    CXX test/cpp_headers/conf.o
00:02:54.688    LINK spdk_trace
00:02:54.688    CXX test/cpp_headers/config.o
00:02:54.688    CXX test/cpp_headers/cpuset.o
00:02:54.688    CXX test/cpp_headers/crc16.o
00:02:54.688    CC test/nvme/cuse/cuse.o
00:02:54.688    CXX test/cpp_headers/crc32.o
00:02:54.688    LINK test_dma
00:02:54.688    CXX test/cpp_headers/crc64.o
00:02:54.688    CXX test/cpp_headers/dif.o
00:02:54.688    CXX test/cpp_headers/dma.o
00:02:54.688    CXX test/cpp_headers/endian.o
00:02:54.688    LINK dif
00:02:54.688    CXX test/cpp_headers/env_dpdk.o
00:02:54.688    CXX test/cpp_headers/env.o
00:02:54.688    CXX test/cpp_headers/event.o
00:02:54.688    LINK spdk_nvme
00:02:54.951    CXX test/cpp_headers/fd_group.o
00:02:54.951    CXX test/cpp_headers/fd.o
00:02:54.951    CXX test/cpp_headers/file.o
00:02:54.951    LINK nvme_manage
00:02:54.951    CXX test/cpp_headers/ftl.o
00:02:54.951    CXX test/cpp_headers/gpt_spec.o
00:02:54.951    CXX test/cpp_headers/hexlify.o
00:02:54.951    CXX test/cpp_headers/idxd.o
00:02:54.951    CXX test/cpp_headers/histogram_data.o
00:02:54.951    LINK nvme_fuzz
00:02:54.951    CXX test/cpp_headers/idxd_spec.o
00:02:54.951    CXX test/cpp_headers/init.o
00:02:54.951    CXX test/cpp_headers/ioat.o
00:02:54.951    LINK blobcli
00:02:54.951    CXX test/cpp_headers/ioat_spec.o
00:02:54.951    LINK spdk_bdev
00:02:54.951    CXX test/cpp_headers/iscsi_spec.o
00:02:54.951    CXX test/cpp_headers/json.o
00:02:54.951    CXX test/cpp_headers/jsonrpc.o
00:02:54.951    LINK bdevio
00:02:54.951    CXX test/cpp_headers/likely.o
00:02:54.951    CXX test/cpp_headers/log.o
00:02:54.951    LINK fused_ordering
00:02:54.951    LINK spdk_nvme_perf
00:02:54.951    CXX test/cpp_headers/lvol.o
00:02:54.951    CXX test/cpp_headers/memory.o
00:02:54.951    CXX test/cpp_headers/mmio.o
00:02:55.213    CXX test/cpp_headers/nbd.o
00:02:55.213    CXX test/cpp_headers/notify.o
00:02:55.213    LINK doorbell_aers
00:02:55.213    CXX test/cpp_headers/nvme.o
00:02:55.213    CXX test/cpp_headers/nvme_intel.o
00:02:55.213    LINK nvmf
00:02:55.213    CXX test/cpp_headers/nvme_ocssd.o
00:02:55.213    CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:55.213    CXX test/cpp_headers/nvme_spec.o
00:02:55.213    CXX test/cpp_headers/nvme_zns.o
00:02:55.213    LINK mem_callbacks
00:02:55.213    CXX test/cpp_headers/nvmf_cmd.o
00:02:55.213    CXX test/cpp_headers/nvmf_fc_spec.o
00:02:55.213    CXX test/cpp_headers/nvmf.o
00:02:55.213    CXX test/cpp_headers/nvmf_spec.o
00:02:55.213    CXX test/cpp_headers/nvmf_transport.o
00:02:55.213    CXX test/cpp_headers/opal.o
00:02:55.213    CXX test/cpp_headers/opal_spec.o
00:02:55.213    CXX test/cpp_headers/pci_ids.o
00:02:55.213    CXX test/cpp_headers/pipe.o
00:02:55.213    CXX test/cpp_headers/queue.o
00:02:55.213    CXX test/cpp_headers/reduce.o
00:02:55.213    CXX test/cpp_headers/rpc.o
00:02:55.213    LINK spdk_nvme_identify
00:02:55.213    LINK fdp
00:02:55.213    CXX test/cpp_headers/scheduler.o
00:02:55.213    CXX test/cpp_headers/scsi.o
00:02:55.475    CXX test/cpp_headers/sock.o
00:02:55.475    CXX test/cpp_headers/stdinc.o
00:02:55.475    CXX test/cpp_headers/scsi_spec.o
00:02:55.475    CXX test/cpp_headers/string.o
00:02:55.475    CXX test/cpp_headers/thread.o
00:02:55.475    CXX test/cpp_headers/trace.o
00:02:55.475    CXX test/cpp_headers/trace_parser.o
00:02:55.475    LINK bdevperf
00:02:55.475    LINK spdk_top
00:02:55.475    CXX test/cpp_headers/ublk.o
00:02:55.475    CXX test/cpp_headers/tree.o
00:02:55.475    CXX test/cpp_headers/util.o
00:02:55.475    CXX test/cpp_headers/uuid.o
00:02:55.475    CXX test/cpp_headers/version.o
00:02:55.475    LINK vhost_fuzz
00:02:55.475    CXX test/cpp_headers/vfio_user_pci.o
00:02:55.475    CXX test/cpp_headers/vhost.o
00:02:55.475    CXX test/cpp_headers/vfio_user_spec.o
00:02:55.475    CXX test/cpp_headers/vmd.o
00:02:55.475    CXX test/cpp_headers/xor.o
00:02:55.475    CXX test/cpp_headers/zipf.o
00:02:55.475    LINK memory_ut
00:02:55.734    LINK accel_perf
00:02:56.304    LINK cuse
00:02:57.244    LINK iscsi_fuzz
00:03:00.539    LINK esnap
00:03:00.799  
00:03:00.799  real	0m56.907s
00:03:00.799  user	8m47.976s
00:03:00.799  sys	3m41.745s
00:03:00.799   10:40:49	-- common/autotest_common.sh@1115 -- $ xtrace_disable
00:03:00.799   10:40:49	-- common/autotest_common.sh@10 -- $ set +x
00:03:00.799  ************************************
00:03:00.799  END TEST make
00:03:00.799  ************************************
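The START/END banners and the real/user/sys block come from the harness timing each stage with the shell's time keyword. A hedged sketch of the pattern as it appears in the output (SPDK's actual helper is run_test in test/common/autotest_common.sh; this is reconstructed from the log, not copied):

    run_test_sketch() {
        local name=$1; shift
        printf '************************************\nSTART TEST %s\n************************************\n' "$name"
        time "$@"                 # prints the real/user/sys lines
        printf '************************************\nEND TEST %s\n************************************\n' "$name"
    }
    run_test_sketch make make -j"$(nproc)"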
00:03:01.059    10:40:49	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:01.059     10:40:49	-- common/autotest_common.sh@1690 -- # lcov --version
00:03:01.059     10:40:49	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:01.059    10:40:49	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:01.059    10:40:49	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:01.059    10:40:49	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:01.059    10:40:49	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:01.059    10:40:49	-- scripts/common.sh@335 -- # IFS=.-:
00:03:01.059    10:40:49	-- scripts/common.sh@335 -- # read -ra ver1
00:03:01.059    10:40:49	-- scripts/common.sh@336 -- # IFS=.-:
00:03:01.059    10:40:49	-- scripts/common.sh@336 -- # read -ra ver2
00:03:01.059    10:40:49	-- scripts/common.sh@337 -- # local 'op=<'
00:03:01.059    10:40:49	-- scripts/common.sh@339 -- # ver1_l=2
00:03:01.059    10:40:49	-- scripts/common.sh@340 -- # ver2_l=1
00:03:01.059    10:40:49	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:01.059    10:40:49	-- scripts/common.sh@343 -- # case "$op" in
00:03:01.059    10:40:49	-- scripts/common.sh@344 -- # : 1
00:03:01.059    10:40:49	-- scripts/common.sh@363 -- # (( v = 0 ))
00:03:01.059    10:40:49	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:01.059     10:40:49	-- scripts/common.sh@364 -- # decimal 1
00:03:01.059     10:40:49	-- scripts/common.sh@352 -- # local d=1
00:03:01.059     10:40:49	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:01.059     10:40:49	-- scripts/common.sh@354 -- # echo 1
00:03:01.059    10:40:49	-- scripts/common.sh@364 -- # ver1[v]=1
00:03:01.059     10:40:49	-- scripts/common.sh@365 -- # decimal 2
00:03:01.059     10:40:49	-- scripts/common.sh@352 -- # local d=2
00:03:01.059     10:40:49	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:01.059     10:40:49	-- scripts/common.sh@354 -- # echo 2
00:03:01.059    10:40:49	-- scripts/common.sh@365 -- # ver2[v]=2
00:03:01.059    10:40:49	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:01.059    10:40:49	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:01.059    10:40:49	-- scripts/common.sh@367 -- # return 0
00:03:01.059    10:40:49	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:01.059    10:40:49	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:01.059  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.059  		--rc genhtml_branch_coverage=1
00:03:01.059  		--rc genhtml_function_coverage=1
00:03:01.059  		--rc genhtml_legend=1
00:03:01.059  		--rc geninfo_all_blocks=1
00:03:01.059  		--rc geninfo_unexecuted_blocks=1
00:03:01.059  		
00:03:01.059  		'
00:03:01.059    10:40:49	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:01.059  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.059  		--rc genhtml_branch_coverage=1
00:03:01.059  		--rc genhtml_function_coverage=1
00:03:01.059  		--rc genhtml_legend=1
00:03:01.059  		--rc geninfo_all_blocks=1
00:03:01.059  		--rc geninfo_unexecuted_blocks=1
00:03:01.059  		
00:03:01.059  		'
00:03:01.059    10:40:49	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:03:01.059  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.059  		--rc genhtml_branch_coverage=1
00:03:01.059  		--rc genhtml_function_coverage=1
00:03:01.059  		--rc genhtml_legend=1
00:03:01.059  		--rc geninfo_all_blocks=1
00:03:01.059  		--rc geninfo_unexecuted_blocks=1
00:03:01.059  		
00:03:01.059  		'
00:03:01.059    10:40:49	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:03:01.059  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.059  		--rc genhtml_branch_coverage=1
00:03:01.059  		--rc genhtml_function_coverage=1
00:03:01.059  		--rc genhtml_legend=1
00:03:01.059  		--rc geninfo_all_blocks=1
00:03:01.059  		--rc geninfo_unexecuted_blocks=1
00:03:01.059  		
00:03:01.059  		'
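The xtrace above walks scripts/common.sh comparing the detected lcov 1.15 against 2: both versions are split on ".", "-", and ":" and compared component-wise, and since 1 < 2 the branch- and function-coverage flags are kept. A hedged re-implementation written from the trace (the real decimal() guard against non-numeric fields is omitted here):

    lt_sketch() {                               # usage: lt_sketch 1.15 2
        local -a ver1 ver2; local a b max v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}     # missing fields count as 0
            (( a > b )) && return 1             # left is newer: not less-than
            (( a < b )) && return 0             # left is older: less-than holds
        done
        return 1                                # equal: not less-than
    }
    lt_sketch 1.15 2 && echo 'lcov < 2: keep the branch-coverage flags'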
00:03:01.059   10:40:49	-- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh
00:03:01.059     10:40:49	-- nvmf/common.sh@7 -- # uname -s
00:03:01.059    10:40:49	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:01.059    10:40:49	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:01.059    10:40:49	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:01.059    10:40:49	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:01.059    10:40:49	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:01.059    10:40:49	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:01.059    10:40:49	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:01.059    10:40:49	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:01.059    10:40:49	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:01.059     10:40:49	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:01.059    10:40:49	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e
00:03:01.059    10:40:49	-- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e
00:03:01.059    10:40:49	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:01.059    10:40:49	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:01.059    10:40:49	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
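nvmf/common.sh derives the host NQN from nvme-cli and reuses its trailing UUID as the host ID, which is exactly what the NVME_HOSTNQN/NVME_HOSTID pair above shows. A minimal rerun (requires nvme-cli):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # keep just the trailing uuid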
00:03:01.059    10:40:49	-- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:03:01.059     10:40:49	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:01.059     10:40:49	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:01.059     10:40:49	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:01.059      10:40:49	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.059      10:40:49	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.059      10:40:49	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.059      10:40:49	-- paths/export.sh@5 -- # export PATH
00:03:01.059      10:40:49	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
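paths/export.sh prepends the golangci, protoc, and Go directories each time it is sourced, so the PATH echoed above carries duplicate entries. That is harmless, but a hypothetical cleanup (not part of SPDK) would be:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}    # drop the trailing separator ORS leaves behind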
00:03:01.059    10:40:49	-- nvmf/common.sh@46 -- # : 0
00:03:01.059    10:40:49	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:03:01.059    10:40:49	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:03:01.059    10:40:49	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:03:01.059    10:40:49	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:01.059    10:40:49	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:01.059    10:40:49	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:03:01.059    10:40:49	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:03:01.059    10:40:49	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:03:01.059   10:40:50	-- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:01.059    10:40:50	-- spdk/autotest.sh@32 -- # uname -s
00:03:01.059   10:40:50	-- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:01.059   10:40:50	-- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:01.059   10:40:50	-- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps
00:03:01.059   10:40:50	-- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:01.059   10:40:50	-- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/coredumps
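autotest.sh swaps the kernel's core_pattern from systemd-coredump to SPDK's core-collector.sh so crash dumps land under the output/coredumps directory echoed above. A hedged reconstruction: the write to /proc is implied by the saved old_core_pattern rather than shown verbatim, and $rootdir/$out stand for the workspace paths in the log (run as root):

    old_core_pattern=$(< /proc/sys/kernel/core_pattern)    # saved for restore
    mkdir -p "$out/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern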
00:03:01.059   10:40:50	-- spdk/autotest.sh@44 -- # modprobe nbd
00:03:01.060    10:40:50	-- spdk/autotest.sh@46 -- # type -P udevadm
00:03:01.060   10:40:50	-- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:01.060   10:40:50	-- spdk/autotest.sh@48 -- # udevadm_pid=2033677
00:03:01.060   10:40:50	-- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:01.060   10:40:50	-- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power
00:03:01.060   10:40:50	-- spdk/autotest.sh@54 -- # echo 2033679
00:03:01.060   10:40:50	-- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power
00:03:01.060   10:40:50	-- spdk/autotest.sh@56 -- # echo 2033680
00:03:01.060   10:40:50	-- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power
00:03:01.060   10:40:50	-- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]]
00:03:01.060   10:40:50	-- spdk/autotest.sh@60 -- # echo 2033681
00:03:01.060   10:40:50	-- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l
00:03:01.060   10:40:50	-- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power -l
00:03:01.060   10:40:50	-- spdk/autotest.sh@62 -- # echo 2033682
00:03:01.060   10:40:50	-- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:01.060   10:40:50	-- spdk/autotest.sh@68 -- # timing_enter autotest
00:03:01.060   10:40:50	-- common/autotest_common.sh@722 -- # xtrace_disable
00:03:01.060   10:40:50	-- common/autotest_common.sh@10 -- # set +x
00:03:01.060   10:40:50	-- spdk/autotest.sh@70 -- # create_test_list
00:03:01.060   10:40:50	-- common/autotest_common.sh@746 -- # xtrace_disable
00:03:01.060   10:40:50	-- common/autotest_common.sh@10 -- # set +x
00:03:01.060  Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log
00:03:01.319  Redirecting to /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log
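The bare "echo 2033679"-style lines interleaved with the collect-* commands above record the PIDs of the power monitors (cpu-load, vmstat, cpu-temp, bmc-pm) started in the background; their output is then redirected to the .pm.log files just named. The pattern, sketched with $power_dir standing in for the .../output/power directory:

    ./scripts/perf/pm/collect-cpu-load -d "$power_dir" &
    echo $!    # the PID that shows up in the log, e.g. 2033679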
00:03:01.319     10:40:50	-- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/autotest.sh
00:03:01.319    10:40:50	-- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk
00:03:01.319   10:40:50	-- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:03:01.319   10:40:50	-- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output
00:03:01.319   10:40:50	-- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvme-phy-autotest/spdk
00:03:01.319   10:40:50	-- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:03:01.319    10:40:50	-- common/autotest_common.sh@1450 -- # uname
00:03:01.319   10:40:50	-- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']'
00:03:01.319   10:40:50	-- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:03:01.319    10:40:50	-- common/autotest_common.sh@1470 -- # uname
00:03:01.319   10:40:50	-- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]]
00:03:01.319   10:40:50	-- spdk/autotest.sh@79 -- # [[ y == y ]]
00:03:01.319   10:40:50	-- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:01.319  lcov: LCOV version 1.15
00:03:01.319   10:40:50	-- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvme-phy-autotest/spdk -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info
00:03:04.651  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:03:04.651  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:03:04.651  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:03:04.651  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:03:04.651  /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:03:04.651  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
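autotest.sh@83 above captures an initial ("-i") zero-coverage baseline over the tree, tagged Baseline, so post-test coverage can later be diffed against it; the geninfo "no functions found" warnings are benign, as those ftl upgrade objects simply contain no instrumented functions. The capture trimmed to its essential flags, with $src/$out standing for the long workspace paths:

    lcov -q -c --no-external -i -t Baseline -d "$src" -o "$out/cov_base.info"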
00:03:37.016   10:41:22	-- spdk/autotest.sh@87 -- # timing_enter pre_cleanup
00:03:37.016   10:41:22	-- common/autotest_common.sh@722 -- # xtrace_disable
00:03:37.016   10:41:22	-- common/autotest_common.sh@10 -- # set +x
00:03:37.016   10:41:22	-- spdk/autotest.sh@89 -- # rm -f
00:03:37.016   10:41:22	-- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:03:37.016  0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:03:37.016  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:03:37.016  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:03:37.276  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:03:37.276  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:03:37.276  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:03:37.276  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:03:37.276  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:03:37.276  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:03:37.276  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:03:37.276   10:41:26	-- spdk/autotest.sh@94 -- # get_zoned_devs
00:03:37.276   10:41:26	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:03:37.276   10:41:26	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:03:37.276   10:41:26	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:03:37.276   10:41:26	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:03:37.276   10:41:26	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:03:37.276   10:41:26	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:03:37.276   10:41:26	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:37.276   10:41:26	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:03:37.276   10:41:26	-- spdk/autotest.sh@96 -- # (( 0 > 0 ))
00:03:37.276    10:41:26	-- spdk/autotest.sh@108 -- # ls /dev/nvme0n1
00:03:37.276    10:41:26	-- spdk/autotest.sh@108 -- # grep -v p
00:03:37.276   10:41:26	-- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:03:37.276   10:41:26	-- spdk/autotest.sh@110 -- # [[ -z '' ]]
00:03:37.276   10:41:26	-- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1
00:03:37.276   10:41:26	-- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:03:37.276   10:41:26	-- scripts/common.sh@389 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:37.535  No valid GPT data, bailing
00:03:37.535    10:41:26	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:37.535   10:41:26	-- scripts/common.sh@393 -- # pt=
00:03:37.535   10:41:26	-- scripts/common.sh@394 -- # return 1
00:03:37.535   10:41:26	-- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:37.535  1+0 records in
00:03:37.535  1+0 records out
00:03:37.535  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00470523 s, 223 MB/s
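The pre-cleanup step traced above wipes any NVMe namespace that carries no partition table: spdk-gpt.py bails out, blkid returns an empty PTTYPE, so the first MiB is zeroed to keep stale metadata out of the tests. A hedged condensation (/dev/nvme0n1 is the device from this run):

    pt=$(blkid -s PTTYPE -o value /dev/nvme0n1 || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
    fi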
00:03:37.535   10:41:26	-- spdk/autotest.sh@116 -- # sync
00:03:37.535   10:41:26	-- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:37.535   10:41:26	-- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:37.535    10:41:26	-- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:42.808    10:41:31	-- spdk/autotest.sh@122 -- # uname -s
00:03:42.808   10:41:31	-- spdk/autotest.sh@122 -- # '[' Linux = Linux ']'
00:03:42.808   10:41:31	-- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/test-setup.sh
00:03:42.808   10:41:31	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:42.808   10:41:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:42.808   10:41:31	-- common/autotest_common.sh@10 -- # set +x
00:03:42.808  ************************************
00:03:42.808  START TEST setup.sh
00:03:42.808  ************************************
00:03:42.808   10:41:31	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/test-setup.sh
00:03:42.808  * Looking for test storage...
00:03:42.808  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:03:42.808     10:41:31	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:42.808      10:41:31	-- common/autotest_common.sh@1690 -- # lcov --version
00:03:42.808      10:41:31	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:42.808     10:41:31	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:42.808     10:41:31	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:42.808     10:41:31	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:42.809     10:41:31	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:42.809     10:41:31	-- scripts/common.sh@335 -- # IFS=.-:
00:03:42.809     10:41:31	-- scripts/common.sh@335 -- # read -ra ver1
00:03:42.809     10:41:31	-- scripts/common.sh@336 -- # IFS=.-:
00:03:42.809     10:41:31	-- scripts/common.sh@336 -- # read -ra ver2
00:03:42.809     10:41:31	-- scripts/common.sh@337 -- # local 'op=<'
00:03:42.809     10:41:31	-- scripts/common.sh@339 -- # ver1_l=2
00:03:42.809     10:41:31	-- scripts/common.sh@340 -- # ver2_l=1
00:03:42.809     10:41:31	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:42.809     10:41:31	-- scripts/common.sh@343 -- # case "$op" in
00:03:42.809     10:41:31	-- scripts/common.sh@344 -- # : 1
00:03:42.809     10:41:31	-- scripts/common.sh@363 -- # (( v = 0 ))
00:03:42.809     10:41:31	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:42.809      10:41:31	-- scripts/common.sh@364 -- # decimal 1
00:03:42.809      10:41:31	-- scripts/common.sh@352 -- # local d=1
00:03:42.809      10:41:31	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:42.809      10:41:31	-- scripts/common.sh@354 -- # echo 1
00:03:42.809     10:41:31	-- scripts/common.sh@364 -- # ver1[v]=1
00:03:42.809      10:41:31	-- scripts/common.sh@365 -- # decimal 2
00:03:42.809      10:41:31	-- scripts/common.sh@352 -- # local d=2
00:03:42.809      10:41:31	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:42.809      10:41:31	-- scripts/common.sh@354 -- # echo 2
00:03:42.809     10:41:31	-- scripts/common.sh@365 -- # ver2[v]=2
00:03:42.809     10:41:31	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:42.809     10:41:31	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:42.809     10:41:31	-- scripts/common.sh@367 -- # return 0
00:03:42.809     10:41:31	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:42.809     10:41:31	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:42.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.809  		--rc genhtml_branch_coverage=1
00:03:42.809  		--rc genhtml_function_coverage=1
00:03:42.809  		--rc genhtml_legend=1
00:03:42.809  		--rc geninfo_all_blocks=1
00:03:42.809  		--rc geninfo_unexecuted_blocks=1
00:03:42.809  		
00:03:42.809  		'
00:03:42.809     10:41:31	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:42.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.809  		--rc genhtml_branch_coverage=1
00:03:42.809  		--rc genhtml_function_coverage=1
00:03:42.809  		--rc genhtml_legend=1
00:03:42.809  		--rc geninfo_all_blocks=1
00:03:42.809  		--rc geninfo_unexecuted_blocks=1
00:03:42.809  		
00:03:42.809  		'
00:03:42.809     10:41:31	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:03:42.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.809  		--rc genhtml_branch_coverage=1
00:03:42.809  		--rc genhtml_function_coverage=1
00:03:42.809  		--rc genhtml_legend=1
00:03:42.809  		--rc geninfo_all_blocks=1
00:03:42.809  		--rc geninfo_unexecuted_blocks=1
00:03:42.809  		
00:03:42.809  		'
00:03:42.809     10:41:31	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:03:42.809  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:42.809  		--rc genhtml_branch_coverage=1
00:03:42.809  		--rc genhtml_function_coverage=1
00:03:42.809  		--rc genhtml_legend=1
00:03:42.809  		--rc geninfo_all_blocks=1
00:03:42.809  		--rc geninfo_unexecuted_blocks=1
00:03:42.809  		
00:03:42.809  		'
00:03:42.809    10:41:31	-- setup/test-setup.sh@10 -- # uname -s
00:03:42.809   10:41:31	-- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:42.809   10:41:31	-- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/acl.sh
00:03:42.809   10:41:31	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:42.809   10:41:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:42.809   10:41:31	-- common/autotest_common.sh@10 -- # set +x
00:03:42.809  ************************************
00:03:42.809  START TEST acl
00:03:42.809  ************************************
00:03:42.809   10:41:31	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/acl.sh
00:03:42.809  * Looking for test storage...
00:03:42.809  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:03:42.809     10:41:31	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:03:42.809      10:41:31	-- common/autotest_common.sh@1690 -- # lcov --version
00:03:42.809      10:41:31	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:03:43.068     10:41:31	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:03:43.068     10:41:31	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:03:43.068     10:41:31	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:03:43.068     10:41:31	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:03:43.068     10:41:31	-- scripts/common.sh@335 -- # IFS=.-:
00:03:43.069     10:41:31	-- scripts/common.sh@335 -- # read -ra ver1
00:03:43.069     10:41:31	-- scripts/common.sh@336 -- # IFS=.-:
00:03:43.069     10:41:31	-- scripts/common.sh@336 -- # read -ra ver2
00:03:43.069     10:41:31	-- scripts/common.sh@337 -- # local 'op=<'
00:03:43.069     10:41:31	-- scripts/common.sh@339 -- # ver1_l=2
00:03:43.069     10:41:31	-- scripts/common.sh@340 -- # ver2_l=1
00:03:43.069     10:41:31	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:03:43.069     10:41:31	-- scripts/common.sh@343 -- # case "$op" in
00:03:43.069     10:41:31	-- scripts/common.sh@344 -- # : 1
00:03:43.069     10:41:31	-- scripts/common.sh@363 -- # (( v = 0 ))
00:03:43.069     10:41:31	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:43.069      10:41:31	-- scripts/common.sh@364 -- # decimal 1
00:03:43.069      10:41:31	-- scripts/common.sh@352 -- # local d=1
00:03:43.069      10:41:31	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:43.069      10:41:31	-- scripts/common.sh@354 -- # echo 1
00:03:43.069     10:41:31	-- scripts/common.sh@364 -- # ver1[v]=1
00:03:43.069      10:41:31	-- scripts/common.sh@365 -- # decimal 2
00:03:43.069      10:41:31	-- scripts/common.sh@352 -- # local d=2
00:03:43.069      10:41:31	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:43.069      10:41:31	-- scripts/common.sh@354 -- # echo 2
00:03:43.069     10:41:31	-- scripts/common.sh@365 -- # ver2[v]=2
00:03:43.069     10:41:31	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:03:43.069     10:41:31	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:03:43.069     10:41:31	-- scripts/common.sh@367 -- # return 0
00:03:43.069     10:41:31	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:43.069     10:41:31	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:03:43.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.069  		--rc genhtml_branch_coverage=1
00:03:43.069  		--rc genhtml_function_coverage=1
00:03:43.069  		--rc genhtml_legend=1
00:03:43.069  		--rc geninfo_all_blocks=1
00:03:43.069  		--rc geninfo_unexecuted_blocks=1
00:03:43.069  		
00:03:43.069  		'
00:03:43.069     10:41:31	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:03:43.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.069  		--rc genhtml_branch_coverage=1
00:03:43.069  		--rc genhtml_function_coverage=1
00:03:43.069  		--rc genhtml_legend=1
00:03:43.069  		--rc geninfo_all_blocks=1
00:03:43.069  		--rc geninfo_unexecuted_blocks=1
00:03:43.069  		
00:03:43.069  		'
00:03:43.069     10:41:31	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:03:43.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.069  		--rc genhtml_branch_coverage=1
00:03:43.069  		--rc genhtml_function_coverage=1
00:03:43.069  		--rc genhtml_legend=1
00:03:43.069  		--rc geninfo_all_blocks=1
00:03:43.069  		--rc geninfo_unexecuted_blocks=1
00:03:43.069  		
00:03:43.069  		'
00:03:43.069     10:41:31	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:03:43.069  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:43.069  		--rc genhtml_branch_coverage=1
00:03:43.069  		--rc genhtml_function_coverage=1
00:03:43.069  		--rc genhtml_legend=1
00:03:43.069  		--rc geninfo_all_blocks=1
00:03:43.069  		--rc geninfo_unexecuted_blocks=1
00:03:43.069  		
00:03:43.069  		'
00:03:43.069   10:41:31	-- setup/acl.sh@10 -- # get_zoned_devs
00:03:43.069   10:41:31	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:03:43.069   10:41:31	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:03:43.069   10:41:31	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:03:43.069   10:41:31	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:03:43.069   10:41:31	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:03:43.069   10:41:31	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:03:43.069   10:41:31	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:43.069   10:41:31	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:03:43.069   10:41:31	-- setup/acl.sh@12 -- # devs=()
00:03:43.069   10:41:31	-- setup/acl.sh@12 -- # declare -a devs
00:03:43.069   10:41:31	-- setup/acl.sh@13 -- # drivers=()
00:03:43.069   10:41:31	-- setup/acl.sh@13 -- # declare -A drivers
00:03:43.069   10:41:31	-- setup/acl.sh@51 -- # setup reset
00:03:43.069   10:41:31	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:43.069   10:41:31	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:03:47.264   10:41:35	-- setup/acl.sh@52 -- # collect_setup_devs
00:03:47.264   10:41:35	-- setup/acl.sh@16 -- # local dev driver
00:03:47.264   10:41:35	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:47.264    10:41:35	-- setup/acl.sh@15 -- # setup output status
00:03:47.264    10:41:35	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.264    10:41:35	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status
00:03:49.800  Hugepages
00:03:49.800  node     hugesize     free /  total
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800  
00:03:49.800  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.800   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.800   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]]
00:03:49.800   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@22 -- # devs+=("$dev")
00:03:49.801   10:41:38	-- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:49.801   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:49.801   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:49.801   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:50.059   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]]
00:03:50.059   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:50.059   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:50.059   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:50.059   10:41:38	-- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]]
00:03:50.059   10:41:38	-- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:03:50.059   10:41:38	-- setup/acl.sh@20 -- # continue
00:03:50.059   10:41:38	-- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:50.059   10:41:38	-- setup/acl.sh@24 -- # (( 1 > 0 ))
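[annotation] The loop traced above (setup/acl.sh lines 18-24) scans one device per line, keeping only NVMe controllers that are not on the block list. A minimal sketch of that scan, fed with two sample rows in the column layout the trace implies (the real input source is not shown in the trace and is an assumption here):

declare -a devs
declare -A drivers
while read -r _ dev _ _ _ driver _; do
  [[ $dev == *:*:*.* ]] || continue            # line 19: PCI BDF addresses only
  [[ $driver == nvme ]] || continue            # line 20: ioatdma rows are skipped
  [[ $PCI_BLOCKED == *"$dev"* ]] && continue   # line 21: honor the block list
  devs+=("$dev")                               # line 22: remember device + driver
  drivers["$dev"]=$driver
done <<'EOF'
IOAT  0000:00:04.0  8086  2021  0  ioatdma  -
NVMe  0000:5e:00.0  8086  0a54  0  nvme     nvme0
EOF
(( ${#devs[@]} > 0 ))                          # line 24: at least one NVMe found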
00:03:50.059   10:41:38	-- setup/acl.sh@54 -- # run_test denied denied
00:03:50.059   10:41:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:50.059   10:41:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:50.059   10:41:38	-- common/autotest_common.sh@10 -- # set +x
00:03:50.059  ************************************
00:03:50.059  START TEST denied
00:03:50.059  ************************************
00:03:50.059   10:41:38	-- common/autotest_common.sh@1114 -- # denied
00:03:50.059   10:41:38	-- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0'
00:03:50.059   10:41:38	-- setup/acl.sh@38 -- # setup output config
00:03:50.060   10:41:38	-- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0'
00:03:50.060   10:41:38	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.060   10:41:38	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:03:54.251  0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0
00:03:54.251   10:41:42	-- setup/acl.sh@40 -- # verify 0000:5e:00.0
00:03:54.251   10:41:42	-- setup/acl.sh@28 -- # local dev driver
00:03:54.251   10:41:42	-- setup/acl.sh@30 -- # for dev in "$@"
00:03:54.251   10:41:42	-- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]]
00:03:54.251    10:41:42	-- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver
00:03:54.251   10:41:42	-- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:54.251   10:41:42	-- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:54.251   10:41:42	-- setup/acl.sh@41 -- # setup reset
00:03:54.251   10:41:42	-- setup/common.sh@9 -- # [[ reset == output ]]
00:03:54.251   10:41:42	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:03:58.444  
00:03:58.444  real	0m8.238s
00:03:58.444  user	0m2.640s
00:03:58.444  sys	0m4.902s
00:03:58.444   10:41:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:03:58.444   10:41:47	-- common/autotest_common.sh@10 -- # set +x
00:03:58.444  ************************************
00:03:58.444  END TEST denied
00:03:58.444  ************************************
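[annotation] The banners above bracket the denied case: with the controller on PCI_BLOCKED, setup.sh config must skip it (the "Skipping denied controller" line) and leave it bound to the kernel nvme driver. The verify step traced at setup/acl.sh lines 28-33 reduces to this sketch, reconstructed from the trace rather than copied from the source:

verify_kernel_bound() {
  local dev driver
  for dev in "$@"; do
    [[ -e /sys/bus/pci/devices/$dev ]] || return 1            # line 31: device exists
    driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")  # line 32: resolve driver
    [[ ${driver##*/} == nvme ]] || return 1                   # line 33: still on nvme
  done
}
verify_kernel_bound 0000:5e:00.0   # the blocked controller from this run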
00:03:58.444   10:41:47	-- setup/acl.sh@55 -- # run_test allowed allowed
00:03:58.444   10:41:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:58.444   10:41:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:58.444   10:41:47	-- common/autotest_common.sh@10 -- # set +x
00:03:58.444  ************************************
00:03:58.444  START TEST allowed
00:03:58.444  ************************************
00:03:58.444   10:41:47	-- common/autotest_common.sh@1114 -- # allowed
00:03:58.444   10:41:47	-- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0
00:03:58.444   10:41:47	-- setup/acl.sh@45 -- # setup output config
00:03:58.444   10:41:47	-- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*'
00:03:58.444   10:41:47	-- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.444   10:41:47	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:04:05.015  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:04:05.015   10:41:53	-- setup/acl.sh@47 -- # verify
00:04:05.015   10:41:53	-- setup/acl.sh@28 -- # local dev driver
00:04:05.015   10:41:53	-- setup/acl.sh@48 -- # setup reset
00:04:05.015   10:41:53	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:05.015   10:41:53	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:04:09.209  
00:04:09.209  real	0m10.406s
00:04:09.209  user	0m2.420s
00:04:09.209  sys	0m4.903s
00:04:09.209   10:41:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.209   10:41:57	-- common/autotest_common.sh@10 -- # set +x
00:04:09.209  ************************************
00:04:09.209  END TEST allowed
00:04:09.209  ************************************
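[annotation] The allowed case is the mirror image: with PCI_ALLOWED set, setup.sh config must rebind the controller to a userspace driver, and the test asserts this straight off the script's output (setup/acl.sh line 46). Equivalent invocation, with a repository-relative path substituted for the absolute one in the trace:

PCI_ALLOWED=0000:5e:00.0 ./scripts/setup.sh config \
  | grep -E '0000:5e:00.0 .*: nvme -> .*'
# expects a line like "0000:5e:00.0 (8086 0a54): nvme -> vfio-pci";
# grep's exit status is the test result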
00:04:09.209  
00:04:09.209  real	0m25.984s
00:04:09.209  user	0m7.736s
00:04:09.209  sys	0m14.766s
00:04:09.209   10:41:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:09.209   10:41:57	-- common/autotest_common.sh@10 -- # set +x
00:04:09.209  ************************************
00:04:09.209  END TEST acl
00:04:09.209  ************************************
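[annotation] Each suite here is wrapped by run_test from common/autotest_common.sh, which prints the START banner, times the suite function (the real/user/sys triplets above), and prints END. A minimal sketch of that wrapper, assuming the banner text seen in this log:

run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                   # e.g. `denied`, `allowed`, or a test script path
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}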
00:04:09.209   10:41:57	-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/hugepages.sh
00:04:09.209   10:41:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.209   10:41:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.209   10:41:57	-- common/autotest_common.sh@10 -- # set +x
00:04:09.209  ************************************
00:04:09.209  START TEST hugepages
00:04:09.209  ************************************
00:04:09.209   10:41:57	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/hugepages.sh
00:04:09.209  * Looking for test storage...
00:04:09.209  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:04:09.209     10:41:57	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:09.209      10:41:57	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:09.209      10:41:57	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:09.209     10:41:57	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:09.209     10:41:57	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:09.209     10:41:57	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:09.209     10:41:57	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:09.209     10:41:57	-- scripts/common.sh@335 -- # IFS=.-:
00:04:09.209     10:41:57	-- scripts/common.sh@335 -- # read -ra ver1
00:04:09.209     10:41:57	-- scripts/common.sh@336 -- # IFS=.-:
00:04:09.209     10:41:57	-- scripts/common.sh@336 -- # read -ra ver2
00:04:09.209     10:41:57	-- scripts/common.sh@337 -- # local 'op=<'
00:04:09.209     10:41:57	-- scripts/common.sh@339 -- # ver1_l=2
00:04:09.209     10:41:57	-- scripts/common.sh@340 -- # ver2_l=1
00:04:09.209     10:41:57	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:09.209     10:41:57	-- scripts/common.sh@343 -- # case "$op" in
00:04:09.209     10:41:57	-- scripts/common.sh@344 -- # : 1
00:04:09.209     10:41:57	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:09.209     10:41:57	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:09.209      10:41:57	-- scripts/common.sh@364 -- # decimal 1
00:04:09.209      10:41:57	-- scripts/common.sh@352 -- # local d=1
00:04:09.209      10:41:57	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:09.209      10:41:57	-- scripts/common.sh@354 -- # echo 1
00:04:09.209     10:41:57	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:09.209      10:41:57	-- scripts/common.sh@365 -- # decimal 2
00:04:09.209      10:41:57	-- scripts/common.sh@352 -- # local d=2
00:04:09.209      10:41:57	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:09.209      10:41:57	-- scripts/common.sh@354 -- # echo 2
00:04:09.209     10:41:57	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:09.209     10:41:57	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:09.209     10:41:57	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:09.209     10:41:57	-- scripts/common.sh@367 -- # return 0
00:04:09.209     10:41:57	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:09.209     10:41:57	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:09.209  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.209  		--rc genhtml_branch_coverage=1
00:04:09.209  		--rc genhtml_function_coverage=1
00:04:09.209  		--rc genhtml_legend=1
00:04:09.209  		--rc geninfo_all_blocks=1
00:04:09.209  		--rc geninfo_unexecuted_blocks=1
00:04:09.209  		
00:04:09.209  		'
00:04:09.209     10:41:57	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:09.209  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.209  		--rc genhtml_branch_coverage=1
00:04:09.209  		--rc genhtml_function_coverage=1
00:04:09.209  		--rc genhtml_legend=1
00:04:09.209  		--rc geninfo_all_blocks=1
00:04:09.209  		--rc geninfo_unexecuted_blocks=1
00:04:09.209  		
00:04:09.209  		'
00:04:09.209     10:41:57	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:09.209  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.209  		--rc genhtml_branch_coverage=1
00:04:09.209  		--rc genhtml_function_coverage=1
00:04:09.209  		--rc genhtml_legend=1
00:04:09.209  		--rc geninfo_all_blocks=1
00:04:09.209  		--rc geninfo_unexecuted_blocks=1
00:04:09.209  		
00:04:09.209  		'
00:04:09.209     10:41:57	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:09.209  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:09.209  		--rc genhtml_branch_coverage=1
00:04:09.209  		--rc genhtml_function_coverage=1
00:04:09.209  		--rc genhtml_legend=1
00:04:09.209  		--rc geninfo_all_blocks=1
00:04:09.209  		--rc geninfo_unexecuted_blocks=1
00:04:09.209  		
00:04:09.209  		'
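[annotation] The block above is scripts/common.sh deciding whether the installed lcov predates 2.x: cmp_versions splits both version strings on '.', '-' and ':' and compares components left to right; for 1.15 < 2 it returns 0, so the legacy --rc coverage options are exported. A compact equivalent, simplified to the '<' case handled here (missing components default to 0):

lt() {
  local IFS='.-:' v x y
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
    x=${a[v]:-0} y=${b[v]:-0}
    (( x > y )) && return 1   # first larger component: not less-than
    (( x < y )) && return 0   # first smaller component: less-than
  done
  return 1                    # equal versions are not less-than
}
lt 1.15 2 && echo "lcov < 2: use legacy --rc branch/function coverage options"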
00:04:09.209   10:41:57	-- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:09.209   10:41:57	-- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:09.209   10:41:57	-- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:09.209   10:41:57	-- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:09.209   10:41:57	-- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:09.209    10:41:57	-- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:09.209    10:41:57	-- setup/common.sh@17 -- # local get=Hugepagesize
00:04:09.209    10:41:57	-- setup/common.sh@18 -- # local node=
00:04:09.209    10:41:57	-- setup/common.sh@19 -- # local var val
00:04:09.209    10:41:57	-- setup/common.sh@20 -- # local mem_f mem
00:04:09.209    10:41:57	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.209    10:41:57	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.209    10:41:57	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.209    10:41:57	-- setup/common.sh@28 -- # mapfile -t mem
00:04:09.209    10:41:57	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.209     10:41:57	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        76475600 kB' 'MemAvailable:   79996028 kB' 'Buffers:            8064 kB' 'Cached:          9620252 kB' 'SwapCached:            0 kB' 'Active:          6414108 kB' 'Inactive:        3691500 kB' 'Active(anon):    6021376 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        480588 kB' 'Mapped:           159776 kB' 'Shmem:           5544084 kB' 'KReclaimable:     179152 kB' 'Slab:             636344 kB' 'SReclaimable:     179152 kB' 'SUnreclaim:       457192 kB' 'KernelStack:       16224 kB' 'PageTables:         8104 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52434168 kB' 'Committed_AS:    7219256 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199096 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    2048' 'HugePages_Free:     2048' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         4194304 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.209    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.209    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.210    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.210    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # continue
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # IFS=': '
00:04:09.211    10:41:57	-- setup/common.sh@31 -- # read -r var val _
00:04:09.211    10:41:57	-- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:09.211    10:41:57	-- setup/common.sh@33 -- # echo 2048
00:04:09.211    10:41:57	-- setup/common.sh@33 -- # return 0
00:04:09.211   10:41:57	-- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:09.211   10:41:57	-- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:09.211   10:41:57	-- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
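[annotation] get_meminfo, traced at length above, is a plain /proc/meminfo scanner: pick the global or per-node meminfo file, strip any "Node N" prefix, then walk the fields until the requested key matches and echo its value (the "echo 2048 / return 0" at the end of the loop). A runnable sketch reconstructed from the trace:

shopt -s extglob
get_meminfo() {
  local get=$1 node=${2:-} var val _ line
  local mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N"
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}
get_meminfo Hugepagesize   # -> 2048 (kB) on this machine, hence default_hugepages=2048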
00:04:09.211   10:41:57	-- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:09.211   10:41:57	-- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:09.211   10:41:57	-- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:09.211   10:41:57	-- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:09.211   10:41:57	-- setup/hugepages.sh@207 -- # get_nodes
00:04:09.211   10:41:57	-- setup/hugepages.sh@27 -- # local node
00:04:09.211   10:41:57	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.211   10:41:57	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:09.211   10:41:57	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.211   10:41:57	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:09.211   10:41:57	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:09.211   10:41:57	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
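[annotation] get_nodes enumerates the NUMA nodes and records each node's current 2048 kB hugepage count in nodes_sys; the trace shows node0 at 2048 pages and node1 at 0, so no_nodes=2. Sketch of the loop; the sysfs read feeding each assignment is an assumption, since the trace only shows the stored values:

shopt -s extglob
get_nodes() {
  local node
  for node in /sys/devices/system/node/node+([0-9]); do
    # assumed source of the traced values (2048 for node0, 0 for node1)
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 ))
}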
00:04:09.211   10:41:57	-- setup/hugepages.sh@208 -- # clear_hp
00:04:09.211   10:41:57	-- setup/hugepages.sh@37 -- # local node hp
00:04:09.211   10:41:57	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:09.211   10:41:57	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:09.211   10:41:57	-- setup/hugepages.sh@41 -- # echo 0
00:04:09.211   10:41:57	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:09.211   10:41:57	-- setup/hugepages.sh@41 -- # echo 0
00:04:09.211   10:41:57	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:09.211   10:41:57	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:09.211   10:41:57	-- setup/hugepages.sh@41 -- # echo 0
00:04:09.211   10:41:57	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:09.211   10:41:57	-- setup/hugepages.sh@41 -- # echo 0
00:04:09.211   10:41:57	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:09.211   10:41:57	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
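[annotation] clear_hp then zeroes every hugepage pool on every node before the test sets its own counts, and exports CLEAR_HUGE=yes so setup.sh starts from a clean slate. Sketch of lines 37-45; the nr_hugepages redirect target is implied rather than shown by the trace:

clear_hp() {
  local node hp
  for node in "${!nodes_sys[@]}"; do
    for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*; do
      echo 0 > "$hp/nr_hugepages"   # assumed target of the traced `echo 0`
    done
  done
  export CLEAR_HUGE=yes
}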
00:04:09.211   10:41:57	-- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:09.211   10:41:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:09.211   10:41:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:09.211   10:41:57	-- common/autotest_common.sh@10 -- # set +x
00:04:09.211  ************************************
00:04:09.211  START TEST default_setup
00:04:09.211  ************************************
00:04:09.211   10:41:57	-- common/autotest_common.sh@1114 -- # default_setup
00:04:09.211   10:41:57	-- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:09.211   10:41:57	-- setup/hugepages.sh@49 -- # local size=2097152
00:04:09.211   10:41:57	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:09.211   10:41:57	-- setup/hugepages.sh@51 -- # shift
00:04:09.211   10:41:57	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:09.211   10:41:57	-- setup/hugepages.sh@52 -- # local node_ids
00:04:09.211   10:41:57	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:09.211   10:41:57	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:09.211   10:41:57	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:09.211   10:41:57	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:09.211   10:41:57	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.211   10:41:57	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:09.211   10:41:57	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:09.211   10:41:57	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.211   10:41:57	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.211   10:41:57	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:09.211   10:41:57	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:09.211   10:41:57	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:09.211   10:41:57	-- setup/hugepages.sh@73 -- # return 0
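[annotation] The arithmetic traced in lines 49-73: a 2097152 kB (2 GiB) request divided by the 2048 kB default page size yields nr_hugepages=1024, all assigned to the single user-supplied node 0. Sketch, with the kB units inferred from those numbers:

get_test_nr_hugepages() {
  local size=$1; shift                          # kB; 2097152 kB == 2 GiB here
  local -a user_nodes=("$@")
  (( size >= default_hugepages )) || return 1   # line 55
  nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 pages
  local node
  for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages              # line 71: node 0 gets all 1024
  done
}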
00:04:09.211   10:41:57	-- setup/hugepages.sh@137 -- # setup output
00:04:09.211   10:41:57	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.211   10:41:57	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:04:12.498  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:12.498  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:15.791  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
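[annotation] Each "ioatdma -> vfio-pci" / "nvme -> vfio-pci" line above is setup.sh detaching a device from its kernel driver and handing it to vfio-pci for userspace DMA. One common way to perform such a rebind through sysfs, as a hypothetical sketch (not necessarily SPDK's actual code path):

rebind_to_vfio() {
  local bdf=$1
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"      # detach kernel driver
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"  # pin the next driver
  echo "$bdf" > /sys/bus/pci/drivers_probe                     # trigger re-probe
}
rebind_to_vfio 0000:5e:00.0   # must run as root; vfio-pci module must be loaded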
00:04:15.791   10:42:04	-- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:15.791   10:42:04	-- setup/hugepages.sh@89 -- # local node
00:04:15.791   10:42:04	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:15.791   10:42:04	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:15.791   10:42:04	-- setup/hugepages.sh@92 -- # local surp
00:04:15.791   10:42:04	-- setup/hugepages.sh@93 -- # local resv
00:04:15.791   10:42:04	-- setup/hugepages.sh@94 -- # local anon
00:04:15.791   10:42:04	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:15.791    10:42:04	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:15.791    10:42:04	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:15.791    10:42:04	-- setup/common.sh@18 -- # local node=
00:04:15.791    10:42:04	-- setup/common.sh@19 -- # local var val
00:04:15.791    10:42:04	-- setup/common.sh@20 -- # local mem_f mem
00:04:15.791    10:42:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.791    10:42:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.791    10:42:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.791    10:42:04	-- setup/common.sh@28 -- # mapfile -t mem
00:04:15.791    10:42:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791     10:42:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78605080 kB' 'MemAvailable:   82125420 kB' 'Buffers:            8064 kB' 'Cached:          9620376 kB' 'SwapCached:            0 kB' 'Active:          6415908 kB' 'Inactive:        3691500 kB' 'Active(anon):    6023176 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        482468 kB' 'Mapped:           159520 kB' 'Shmem:           5544208 kB' 'KReclaimable:     178976 kB' 'Slab:             636176 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       457200 kB' 'KernelStack:       16496 kB' 'PageTables:         8580 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7227904 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199112 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.791    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.791    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.792    10:42:04	-- setup/common.sh@33 -- # echo 0
00:04:15.792    10:42:04	-- setup/common.sh@33 -- # return 0
00:04:15.792   10:42:04	-- setup/hugepages.sh@97 -- # anon=0
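[annotation] verify_nr_hugepages first rules transparent hugepages in or out (line 96): the bracketed word in the THP knob is the active mode, and any mode other than [never] means AnonHugePages must be read and folded into the accounting; here the mode is [madvise] and AnonHugePages is 0 kB, so anon=0. Equivalent check, using the get_meminfo sketch above:

thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" above
if [[ $thp != *"[never]"* ]]; then
  anon=$(get_meminfo AnonHugePages)   # 0 in this run
else
  anon=0
fi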
00:04:15.792    10:42:04	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:15.792    10:42:04	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.792    10:42:04	-- setup/common.sh@18 -- # local node=
00:04:15.792    10:42:04	-- setup/common.sh@19 -- # local var val
00:04:15.792    10:42:04	-- setup/common.sh@20 -- # local mem_f mem
00:04:15.792    10:42:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.792    10:42:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.792    10:42:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.792    10:42:04	-- setup/common.sh@28 -- # mapfile -t mem
00:04:15.792    10:42:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792     10:42:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78604176 kB' 'MemAvailable:   82124516 kB' 'Buffers:            8064 kB' 'Cached:          9620376 kB' 'SwapCached:            0 kB' 'Active:          6416664 kB' 'Inactive:        3691500 kB' 'Active(anon):    6023932 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        483492 kB' 'Mapped:           159668 kB' 'Shmem:           5544208 kB' 'KReclaimable:     178976 kB' 'Slab:             636136 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       457160 kB' 'KernelStack:       16288 kB' 'PageTables:         8664 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7227648 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199080 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.792    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.792    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.793    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.793    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.794    10:42:04	-- setup/common.sh@33 -- # echo 0
00:04:15.794    10:42:04	-- setup/common.sh@33 -- # return 0
00:04:15.794   10:42:04	-- setup/hugepages.sh@99 -- # surp=0
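[The scan traced above is setup/common.sh's get_meminfo helper: it loads /proc/meminfo (or a per-node sysfs file) into an array, walks it with IFS=': ' and read -r var val _, and compares each key against the requested field; on a match it echoes the value and returns 0, which hugepages.sh@99 captures as surp=0. The backslash-escaped right-hand sides (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) are simply how xtrace renders a quoted, literal [[ == ]] operand. A minimal sketch of the same pattern, reconstructed from the trace rather than copied from the source:

    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        # Per-node stats live under sysfs when a node index is given.
        # (The real script also strips the "Node N " line prefix in that
        # case; see the @29 expansion later in the trace.)
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # "$get" is quoted, so [[ ]] compares literally -- xtrace
            # prints that quoting as the \H\u\g\e... escapes.
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }

    surp=$(get_meminfo HugePages_Surp)   # -> 0 in this run
]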
00:04:15.794    10:42:04	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:15.794    10:42:04	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.794    10:42:04	-- setup/common.sh@18 -- # local node=
00:04:15.794    10:42:04	-- setup/common.sh@19 -- # local var val
00:04:15.794    10:42:04	-- setup/common.sh@20 -- # local mem_f mem
00:04:15.794    10:42:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.794    10:42:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.794    10:42:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.794    10:42:04	-- setup/common.sh@28 -- # mapfile -t mem
00:04:15.794    10:42:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794     10:42:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78596912 kB' 'MemAvailable:   82117252 kB' 'Buffers:            8064 kB' 'Cached:          9620388 kB' 'SwapCached:            0 kB' 'Active:          6420540 kB' 'Inactive:        3691500 kB' 'Active(anon):    6027808 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        486860 kB' 'Mapped:           160080 kB' 'Shmem:           5544220 kB' 'KReclaimable:     178976 kB' 'Slab:             636156 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       457180 kB' 'KernelStack:       16304 kB' 'PageTables:         7964 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7231648 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199100 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.794    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.794    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.795    10:42:04	-- setup/common.sh@33 -- # echo 0
00:04:15.795    10:42:04	-- setup/common.sh@33 -- # return 0
00:04:15.795   10:42:04	-- setup/hugepages.sh@100 -- # resv=0
00:04:15.795   10:42:04	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:15.795  nr_hugepages=1024
00:04:15.795   10:42:04	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:15.795  resv_hugepages=0
00:04:15.795   10:42:04	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:15.795  surplus_hugepages=0
00:04:15.795   10:42:04	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:15.795  anon_hugepages=0
00:04:15.795   10:42:04	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:15.795   10:42:04	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
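[hugepages.sh@102-@105 print the four derived counts (each echo is immediately followed by its output line), and @107/@109 assert that the 1024 configured pages are fully accounted for: the kernel total must equal nr_hugepages + surp + resv, and with surplus and reserved both zero it must also equal nr_hugepages alone. The same invariant can be checked in one pass over /proc/meminfo; a hypothetical standalone check (field names exactly as they appear in the file):

    awk '/^HugePages_(Total|Rsvd|Surp):/ { v[$1] = $2 }
         END { exit !(v["HugePages_Total:"] == 1024 &&
                      v["HugePages_Rsvd:"] + v["HugePages_Surp:"] == 0) }' \
        /proc/meminfo && echo "hugepage accounting consistent"
]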
00:04:15.795    10:42:04	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:15.795    10:42:04	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:15.795    10:42:04	-- setup/common.sh@18 -- # local node=
00:04:15.795    10:42:04	-- setup/common.sh@19 -- # local var val
00:04:15.795    10:42:04	-- setup/common.sh@20 -- # local mem_f mem
00:04:15.795    10:42:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.795    10:42:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.795    10:42:04	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.795    10:42:04	-- setup/common.sh@28 -- # mapfile -t mem
00:04:15.795    10:42:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795     10:42:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78606952 kB' 'MemAvailable:   82127292 kB' 'Buffers:            8064 kB' 'Cached:          9620404 kB' 'SwapCached:            0 kB' 'Active:          6415300 kB' 'Inactive:        3691500 kB' 'Active(anon):    6022568 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        481996 kB' 'Mapped:           159576 kB' 'Shmem:           5544236 kB' 'KReclaimable:     178976 kB' 'Slab:             636128 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       457152 kB' 'KernelStack:       16240 kB' 'PageTables:         8412 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7229328 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199112 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.795    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.795    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.796    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.796    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.797    10:42:04	-- setup/common.sh@33 -- # echo 1024
00:04:15.797    10:42:04	-- setup/common.sh@33 -- # return 0
00:04:15.797   10:42:04	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
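[@110 repeats the sum check, but against a fresh get_meminfo HugePages_Total read (the 1024 echoed just above) rather than the cached shell variable, so a pool resized between scans would be caught here. Schematically, with the helper name from the trace and the nr_hugepages source assumed:

    nr=$(< /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages)  # 1024 here
    surp=$(get_meminfo HugePages_Surp)    # 0
    resv=$(get_meminfo HugePages_Rsvd)    # 0
    total=$(get_meminfo HugePages_Total)  # fresh kernel read, as @110 does
    (( total == nr + surp + resv )) || echo "hugepage pool changed mid-check" >&2
]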
00:04:15.797   10:42:04	-- setup/hugepages.sh@112 -- # get_nodes
00:04:15.797   10:42:04	-- setup/hugepages.sh@27 -- # local node
00:04:15.797   10:42:04	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.797   10:42:04	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:15.797   10:42:04	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.797   10:42:04	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:15.797   10:42:04	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:15.797   10:42:04	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
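[get_nodes (hugepages.sh@27-@33, entered via @112) enumerates the NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) -- an extglob pattern, so the script runs with shopt -s extglob -- and records each node's hugepage count in nodes_sys[] keyed by the node index (${node##*node} strips everything through the literal "node", leaving the digits). Here that yields node0=1024, node1=0 and no_nodes=2. A minimal reconstruction; the per-node sysfs count path is assumed for the default 2 MiB page size, not taken from the trace:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed path for 2 MiB hugepages on each node.
        nodes_sys[${node##*node}]=$(< "$node"/hugepages/hugepages-2048kB/nr_hugepages)
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes found" >&2
]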
00:04:15.797   10:42:04	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.797   10:42:04	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.797    10:42:04	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:15.797    10:42:04	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.797    10:42:04	-- setup/common.sh@18 -- # local node=0
00:04:15.797    10:42:04	-- setup/common.sh@19 -- # local var val
00:04:15.797    10:42:04	-- setup/common.sh@20 -- # local mem_f mem
00:04:15.797    10:42:04	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.797    10:42:04	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:15.797    10:42:04	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:15.797    10:42:04	-- setup/common.sh@28 -- # mapfile -t mem
00:04:15.797    10:42:04	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797     10:42:04	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        43803944 kB' 'MemUsed:         4260904 kB' 'SwapCached:            0 kB' 'Active:          1257480 kB' 'Inactive:         171132 kB' 'Active(anon):    1046928 kB' 'Inactive(anon):        0 kB' 'Active(file):     210552 kB' 'Inactive(file):   171132 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       1267592 kB' 'Mapped:            89216 kB' 'AnonPages:        164184 kB' 'Shmem:            885908 kB' 'KernelStack:        8792 kB' 'PageTables:         3376 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80716 kB' 'Slab:             303236 kB' 'SReclaimable:      80716 kB' 'SUnreclaim:       222520 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # IFS=': '
00:04:15.797    10:42:04	-- setup/common.sh@31 -- # read -r var val _
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.797    10:42:04	-- setup/common.sh@32 -- # continue
[... the same IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for every remaining /proc/meminfo field until the target matches ...]
00:04:15.798    10:42:04	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.798    10:42:04	-- setup/common.sh@33 -- # echo 0
00:04:15.798    10:42:04	-- setup/common.sh@33 -- # return 0
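The trace above is setup/common.sh's get_meminfo walking /proc/meminfo line by line until the requested field (here HugePages_Surp) matches, then echoing its value. A minimal standalone sketch of that pattern, inferred from the xtrace alone and not the verbatim SPDK helper (the real function also supports per-node /sys/devices/system/node/node<N>/meminfo files via its node= argument):

  # Hypothetical standalone version of the lookup loop traced above.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do        # split "Field:   value kB"
          [[ $var == "$get" ]] || continue        # skip every non-matching field
          echo "${val:-0}"                        # print the value; "kB" lands in $_
          return 0
      done < /proc/meminfo
      return 1
  }
  # Usage: get_meminfo_sketch HugePages_Surp   -> prints 0 on this node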
00:04:15.798   10:42:04	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:15.798   10:42:04	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:15.798   10:42:04	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:15.798   10:42:04	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:15.798   10:42:04	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:15.798  node0=1024 expecting 1024
00:04:15.798   10:42:04	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
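The sorted_t/sorted_s assignments above use the count itself as an array index, so duplicate per-node values collapse into a single key; comparing the echoed pair (hugepages.sh@130) then verifies each node holds exactly what the test requested. A sketch of the idiom, with nodes_sys assumed to carry the kernel-reported per-node counts:

  # Sketch only; names taken from the trace, single node for brevity.
  nodes_test=([0]=1024)                 # pages the test expects on node0
  nodes_sys=([0]=1024)                  # pages the kernel actually reports
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1      # index by the value: duplicates collapse
      sorted_s[nodes_sys[node]]=1
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
      [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || exit 1
  done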
00:04:15.798  
00:04:15.798  real	0m6.759s
00:04:15.798  user	0m1.391s
00:04:15.798  sys	0m2.334s
00:04:15.798   10:42:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:15.798   10:42:04	-- common/autotest_common.sh@10 -- # set +x
00:04:15.798  ************************************
00:04:15.798  END TEST default_setup
00:04:15.798  ************************************
00:04:15.798   10:42:04	-- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:15.798   10:42:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:15.798   10:42:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:15.798   10:42:04	-- common/autotest_common.sh@10 -- # set +x
00:04:15.798  ************************************
00:04:15.798  START TEST per_node_1G_alloc
00:04:15.798  ************************************
00:04:15.798   10:42:04	-- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:15.798   10:42:04	-- setup/hugepages.sh@143 -- # local IFS=,
00:04:15.798   10:42:04	-- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:15.798   10:42:04	-- setup/hugepages.sh@49 -- # local size=1048576
00:04:15.798   10:42:04	-- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:15.798   10:42:04	-- setup/hugepages.sh@51 -- # shift
00:04:15.798   10:42:04	-- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:15.798   10:42:04	-- setup/hugepages.sh@52 -- # local node_ids
00:04:15.798   10:42:04	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:15.798   10:42:04	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:15.798   10:42:04	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:15.798   10:42:04	-- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:15.798   10:42:04	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:15.798   10:42:04	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:15.798   10:42:04	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:15.798   10:42:04	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:15.798   10:42:04	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:15.798   10:42:04	-- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:15.798   10:42:04	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:15.798   10:42:04	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:15.798   10:42:04	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:15.798   10:42:04	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:15.798   10:42:04	-- setup/hugepages.sh@73 -- # return 0
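The get_test_nr_hugepages 1048576 0 1 call above asks for 1048576 kB (1 GiB) worth of default-size hugepages spread over nodes 0 and 1; with Hugepagesize at 2048 kB that comes to 512 pages per listed node, matching the NRHUGE=512 / HUGENODE=0,1 lines that follow. The arithmetic as a sketch (the division step is assumed; the trace only shows its 512 result):

  size_kb=1048576                                    # requested hugepage memory
  default_hugepages=2048                             # kB per page (Hugepagesize)
  nr_hugepages=$(( size_kb / default_hugepages ))    # -> 512
  user_nodes=(0 1)
  for node in "${user_nodes[@]}"; do
      nodes_test[node]=$nr_hugepages                 # each listed node gets all 512
  done
  echo "NRHUGE=$nr_hugepages HUGENODE=0,1 -> $(( nr_hugepages * ${#user_nodes[@]} )) pages total"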
00:04:15.798   10:42:04	-- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:15.798   10:42:04	-- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:15.798   10:42:04	-- setup/hugepages.sh@146 -- # setup output
00:04:15.798   10:42:04	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.798   10:42:04	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:04:19.089  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:19.089  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:19.089  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
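Each "Already using the vfio-pci driver" line above means setup.sh found the PCI function already bound to vfio-pci and left the binding alone. A generic sysfs query for checking such a binding (not the setup.sh internals; the device address is taken from the log above):

  dev=0000:5e:00.0                                   # NVMe device from this run
  if [[ -e /sys/bus/pci/devices/$dev/driver ]]; then
      basename "$(readlink -f /sys/bus/pci/devices/$dev/driver)"   # -> vfio-pci here
  else
      echo "$dev is not bound to any driver"
  fi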
00:04:19.089   10:42:08	-- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:19.089   10:42:08	-- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:19.089   10:42:08	-- setup/hugepages.sh@89 -- # local node
00:04:19.089   10:42:08	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:19.089   10:42:08	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:19.089   10:42:08	-- setup/hugepages.sh@92 -- # local surp
00:04:19.089   10:42:08	-- setup/hugepages.sh@93 -- # local resv
00:04:19.089   10:42:08	-- setup/hugepages.sh@94 -- # local anon
00:04:19.089   10:42:08	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:19.089    10:42:08	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:19.089    10:42:08	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:19.089    10:42:08	-- setup/common.sh@18 -- # local node=
00:04:19.089    10:42:08	-- setup/common.sh@19 -- # local var val
00:04:19.089    10:42:08	-- setup/common.sh@20 -- # local mem_f mem
00:04:19.089    10:42:08	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.089    10:42:08	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.089    10:42:08	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.089    10:42:08	-- setup/common.sh@28 -- # mapfile -t mem
00:04:19.089    10:42:08	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.089    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.089    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.089     10:42:08	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78591188 kB' 'MemAvailable:   82111528 kB' 'Buffers:            8064 kB' 'Cached:          9620476 kB' 'SwapCached:            0 kB' 'Active:          6411724 kB' 'Inactive:        3691500 kB' 'Active(anon):    6018992 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        477320 kB' 'Mapped:           158372 kB' 'Shmem:           5544308 kB' 'KReclaimable:     178976 kB' 'Slab:             635616 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       456640 kB' 'KernelStack:       15968 kB' 'PageTables:         7268 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7212264 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199064 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:19.089    10:42:08	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.089    10:42:08	-- setup/common.sh@32 -- # continue
[... the read/skip cycle repeats for every /proc/meminfo field ahead of the target ...]
00:04:19.090    10:42:08	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:19.090    10:42:08	-- setup/common.sh@33 -- # echo 0
00:04:19.090    10:42:08	-- setup/common.sh@33 -- # return 0
00:04:19.090   10:42:08	-- setup/hugepages.sh@97 -- # anon=0
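anon=0 above comes from the guard at hugepages.sh@96: AnonHugePages is only read when /sys/kernel/mm/transparent_hugepage/enabled is not pinned to [never] (here it shows "always [madvise] never"). A sketch of that check:

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  anon=0
  if [[ $thp != *"[never]"* ]]; then                    # THP not fully disabled
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 on this node
  fi
  echo "anon=${anon:-0}"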
00:04:19.090    10:42:08	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:19.090    10:42:08	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.090    10:42:08	-- setup/common.sh@18 -- # local node=
00:04:19.090    10:42:08	-- setup/common.sh@19 -- # local var val
00:04:19.090    10:42:08	-- setup/common.sh@20 -- # local mem_f mem
00:04:19.090    10:42:08	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.090    10:42:08	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.090    10:42:08	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.090    10:42:08	-- setup/common.sh@28 -- # mapfile -t mem
00:04:19.090    10:42:08	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.091     10:42:08	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78591912 kB' 'MemAvailable:   82112252 kB' 'Buffers:            8064 kB' 'Cached:          9620480 kB' 'SwapCached:            0 kB' 'Active:          6411028 kB' 'Inactive:        3691500 kB' 'Active(anon):    6018296 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        477132 kB' 'Mapped:           158284 kB' 'Shmem:           5544312 kB' 'KReclaimable:     178976 kB' 'Slab:             635572 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       456596 kB' 'KernelStack:       15984 kB' 'PageTables:         7312 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7212280 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199064 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:19.091    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.091    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.091    10:42:08	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.091    10:42:08	-- setup/common.sh@32 -- # continue
[... the read/skip cycle repeats field by field down /proc/meminfo until HugePages_Surp matches ...]
00:04:19.091    10:42:08	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.091    10:42:08	-- setup/common.sh@33 -- # echo 0
00:04:19.091    10:42:08	-- setup/common.sh@33 -- # return 0
00:04:19.091   10:42:08	-- setup/hugepages.sh@99 -- # surp=0
00:04:19.091    10:42:08	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:19.091    10:42:08	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:19.091    10:42:08	-- setup/common.sh@18 -- # local node=
00:04:19.091    10:42:08	-- setup/common.sh@19 -- # local var val
00:04:19.091    10:42:08	-- setup/common.sh@20 -- # local mem_f mem
00:04:19.091    10:42:08	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.091    10:42:08	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.091    10:42:08	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.091    10:42:08	-- setup/common.sh@28 -- # mapfile -t mem
00:04:19.091    10:42:08	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.354    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.354    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.354     10:42:08	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78593028 kB' 'MemAvailable:   82113368 kB' 'Buffers:            8064 kB' 'Cached:          9620492 kB' 'SwapCached:            0 kB' 'Active:          6410852 kB' 'Inactive:        3691500 kB' 'Active(anon):    6018120 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        476980 kB' 'Mapped:           158276 kB' 'Shmem:           5544324 kB' 'KReclaimable:     178976 kB' 'Slab:             635580 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       456604 kB' 'KernelStack:       15952 kB' 'PageTables:         7180 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7211924 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199048 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:19.354    10:42:08	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.354    10:42:08	-- setup/common.sh@32 -- # continue
[... the read/skip cycle repeats for each field until HugePages_Rsvd matches ...]
00:04:19.355    10:42:08	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:19.355    10:42:08	-- setup/common.sh@33 -- # echo 0
00:04:19.355    10:42:08	-- setup/common.sh@33 -- # return 0
00:04:19.355   10:42:08	-- setup/hugepages.sh@100 -- # resv=0
00:04:19.355   10:42:08	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:19.355  nr_hugepages=1024
00:04:19.355   10:42:08	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:19.355  resv_hugepages=0
00:04:19.355   10:42:08	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:19.355  surplus_hugepages=0
00:04:19.355   10:42:08	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:19.355  anon_hugepages=0
00:04:19.355   10:42:08	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.355   10:42:08	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
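
The block above is setup/common.sh's get_meminfo resolving HugePages_Rsvd out of /proc/meminfo: each line is split on ': ' into a key and a value, non-matching keys fall through continue, and the matching value is echoed so the caller can capture it with command substitution (hence resv=0, and the 1024 == nr_hugepages + surp + resv sanity check passes). A minimal standalone sketch of the same parsing pattern — the helper name is mine, not SPDK's:

  get_meminfo_sketch() {   # usage: get_meminfo_sketch HugePages_Rsvd [node]
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while read -r line; do
          line=${line#"Node $node "}             # per-node files prefix every line with "Node N "
          IFS=': ' read -r var val _ <<<"$line"  # key, value, then the trailing "kB" unit if any
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done <"$mem_f"
      return 1
  }

Used the way the trace uses it — resv=$(get_meminfo_sketch HugePages_Rsvd) — which is why the function ends in echo/return rather than setting a variable.
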
00:04:19.355    10:42:08	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:19.355    10:42:08	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:19.355    10:42:08	-- setup/common.sh@18 -- # local node=
00:04:19.355    10:42:08	-- setup/common.sh@19 -- # local var val
00:04:19.355    10:42:08	-- setup/common.sh@20 -- # local mem_f mem
00:04:19.355    10:42:08	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.355    10:42:08	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:19.355    10:42:08	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:19.355    10:42:08	-- setup/common.sh@28 -- # mapfile -t mem
00:04:19.355    10:42:08	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.355    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.355    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.355     10:42:08	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78593480 kB' 'MemAvailable:   82113820 kB' 'Buffers:            8064 kB' 'Cached:          9620520 kB' 'SwapCached:            0 kB' 'Active:          6410312 kB' 'Inactive:        3691500 kB' 'Active(anon):    6017580 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        476380 kB' 'Mapped:           158276 kB' 'Shmem:           5544352 kB' 'KReclaimable:     178976 kB' 'Slab:             635580 kB' 'SReclaimable:     178976 kB' 'SUnreclaim:       456604 kB' 'KernelStack:       15920 kB' 'PageTables:         7088 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7212076 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199000 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
[... 190 xtrace lines elided: MemTotal through Unaccepted checked against HugePages_Total; every key falls through to continue ...]
00:04:19.356    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.356    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.356    10:42:08	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:19.356    10:42:08	-- setup/common.sh@33 -- # echo 1024
00:04:19.356    10:42:08	-- setup/common.sh@33 -- # return 0
00:04:19.356   10:42:08	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:19.356   10:42:08	-- setup/hugepages.sh@112 -- # get_nodes
00:04:19.357   10:42:08	-- setup/hugepages.sh@27 -- # local node
00:04:19.357   10:42:08	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.357   10:42:08	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:19.357   10:42:08	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:19.357   10:42:08	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:19.357   10:42:08	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:19.357   10:42:08	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
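
get_nodes (hugepages.sh) discovers the NUMA topology by globbing the sysfs node directories; two hits here give no_nodes=2, with nodes_sys recording 512 pages for each node. A sketch of the same discovery — reading the count from the 2048 kB hugepage knob is my assumption about where the 512 values come from:

  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # index by numeric node id; value is what sysfs currently reports for 2 MB pages
      nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"   # 2 on this machine
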
00:04:19.357   10:42:08	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.357   10:42:08	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.357    10:42:08	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:19.357    10:42:08	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.357    10:42:08	-- setup/common.sh@18 -- # local node=0
00:04:19.357    10:42:08	-- setup/common.sh@19 -- # local var val
00:04:19.357    10:42:08	-- setup/common.sh@20 -- # local mem_f mem
00:04:19.357    10:42:08	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.357    10:42:08	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:19.357    10:42:08	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:19.357    10:42:08	-- setup/common.sh@28 -- # mapfile -t mem
00:04:19.357    10:42:08	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.357    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.357    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.357     10:42:08	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        44853528 kB' 'MemUsed:         3211320 kB' 'SwapCached:            0 kB' 'Active:          1255084 kB' 'Inactive:         171132 kB' 'Active(anon):    1044532 kB' 'Inactive(anon):        0 kB' 'Active(file):     210552 kB' 'Inactive(file):   171132 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       1267672 kB' 'Mapped:            88844 kB' 'AnonPages:        161696 kB' 'Shmem:            885988 kB' 'KernelStack:        8488 kB' 'PageTables:         3044 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80716 kB' 'Slab:             302688 kB' 'SReclaimable:      80716 kB' 'SUnreclaim:       221972 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
[... 142 xtrace lines elided: node0 meminfo keys MemTotal through HugePages_Free checked against HugePages_Surp; every key falls through to continue ...]
00:04:19.358    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.358    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.358    10:42:08	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.358    10:42:08	-- setup/common.sh@33 -- # echo 0
00:04:19.358    10:42:08	-- setup/common.sh@33 -- # return 0
00:04:19.358   10:42:08	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
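
For each node the expected count is adjusted by the globally reserved pages and by that node's surplus pages, both 0 here, so nodes_test[0] stays at 512. The same accumulation, sketched with the helper from earlier (the 512 seeds come from the even split done when the test set its targets):

  resv=0                        # global HugePages_Rsvd, resolved above
  nodes_test=([0]=512 [1]=512)  # per-node targets from the earlier split
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(get_meminfo_sketch HugePages_Surp "$node")  # per-node surplus, 0 here
      (( nodes_test[node] += surp ))
  done
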
00:04:19.358   10:42:08	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:19.358   10:42:08	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:19.358    10:42:08	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:19.358    10:42:08	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:19.358    10:42:08	-- setup/common.sh@18 -- # local node=1
00:04:19.358    10:42:08	-- setup/common.sh@19 -- # local var val
00:04:19.358    10:42:08	-- setup/common.sh@20 -- # local mem_f mem
00:04:19.358    10:42:08	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:19.358    10:42:08	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:19.358    10:42:08	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:19.358    10:42:08	-- setup/common.sh@28 -- # mapfile -t mem
00:04:19.358    10:42:08	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:19.358    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.358    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.358     10:42:08	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220584 kB' 'MemFree:        33740340 kB' 'MemUsed:        10480244 kB' 'SwapCached:            0 kB' 'Active:          5155656 kB' 'Inactive:        3520368 kB' 'Active(anon):    4973476 kB' 'Inactive(anon):        0 kB' 'Active(file):     182180 kB' 'Inactive(file):  3520368 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8360936 kB' 'Mapped:            69432 kB' 'AnonPages:        315100 kB' 'Shmem:           4658388 kB' 'KernelStack:        7448 kB' 'PageTables:         4100 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      98260 kB' 'Slab:             332892 kB' 'SReclaimable:      98260 kB' 'SUnreclaim:       234632 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
[... 142 xtrace lines elided: node1 meminfo keys MemTotal through HugePages_Free checked against HugePages_Surp; every key falls through to continue ...]
00:04:19.359    10:42:08	-- setup/common.sh@31 -- # IFS=': '
00:04:19.359    10:42:08	-- setup/common.sh@31 -- # read -r var val _
00:04:19.359    10:42:08	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:19.359    10:42:08	-- setup/common.sh@33 -- # echo 0
00:04:19.359    10:42:08	-- setup/common.sh@33 -- # return 0
00:04:19.359   10:42:08	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:19.359   10:42:08	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:19.359   10:42:08	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:19.359   10:42:08	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:19.359   10:42:08	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:19.359  node0=512 expecting 512
00:04:19.359   10:42:08	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:19.359   10:42:08	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:19.359   10:42:08	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:19.359   10:42:08	-- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:19.359  node1=512 expecting 512
00:04:19.359   10:42:08	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
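
The sorted_t/sorted_s arrays use the per-node counts as keys, so duplicate values collapse: after the loop, the number of distinct counts equals the number of keys, and an even allocation leaves exactly one. The trace only shows the final [[ 512 == 512 ]] comparison; the single-key check below is my reading of what the arrays are for:

  declare -A sorted_t sorted_s
  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1   # de-duplicate expected counts
      sorted_s[${nodes_sys[node]}]=1    # de-duplicate sysfs counts
      echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
  done
  (( ${#sorted_t[@]} == 1 && ${#sorted_s[@]} == 1 )) || echo 'uneven allocation'
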
00:04:19.359  
00:04:19.359  real	0m3.524s
00:04:19.359  user	0m1.428s
00:04:19.359  sys	0m2.193s
00:04:19.359   10:42:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:19.359   10:42:08	-- common/autotest_common.sh@10 -- # set +x
00:04:19.359  ************************************
00:04:19.359  END TEST per_node_1G_alloc
00:04:19.359  ************************************
00:04:19.359   10:42:08	-- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:19.359   10:42:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:19.359   10:42:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:19.359   10:42:08	-- common/autotest_common.sh@10 -- # set +x
00:04:19.359  ************************************
00:04:19.359  START TEST even_2G_alloc
00:04:19.359  ************************************
00:04:19.359   10:42:08	-- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:19.359   10:42:08	-- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:19.359   10:42:08	-- setup/hugepages.sh@49 -- # local size=2097152
00:04:19.359   10:42:08	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:19.359   10:42:08	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:19.359   10:42:08	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:19.359   10:42:08	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:19.359   10:42:08	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:19.359   10:42:08	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:19.359   10:42:08	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:19.359   10:42:08	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:19.359   10:42:08	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:19.359   10:42:08	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:19.359   10:42:08	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:19.359   10:42:08	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:19.359   10:42:08	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:19.359   10:42:08	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:19.359   10:42:08	-- setup/hugepages.sh@83 -- # : 512
00:04:19.359   10:42:08	-- setup/hugepages.sh@84 -- # : 1
00:04:19.359   10:42:08	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:19.359   10:42:08	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:19.359   10:42:08	-- setup/hugepages.sh@83 -- # : 0
00:04:19.359   10:42:08	-- setup/hugepages.sh@84 -- # : 0
00:04:19.359   10:42:08	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
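
get_test_nr_hugepages_per_node divides the 1024-page target evenly across the 2 nodes, assigning from the highest node index down; the ": 512" / ": 1" lines above are no-op arithmetic updates of the remaining page and node counters. A sketch consistent with the traced values, not SPDK's verbatim source:

  _nr_hugepages=1024 _no_nodes=2
  nodes_test=()
  while (( _no_nodes > 0 )); do
      nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
      : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # pages left to place
      : $(( _no_nodes-- ))                                  # nodes left to fill
  done
  # -> nodes_test[1]=512, nodes_test[0]=512
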
00:04:19.359   10:42:08	-- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:19.359   10:42:08	-- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:19.359   10:42:08	-- setup/hugepages.sh@153 -- # setup output
00:04:19.359   10:42:08	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.359   10:42:08	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:04:22.651  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:22.651  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:22.651  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:22.651   10:42:11	-- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:22.651   10:42:11	-- setup/hugepages.sh@89 -- # local node
00:04:22.651   10:42:11	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:22.651   10:42:11	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:22.651   10:42:11	-- setup/hugepages.sh@92 -- # local surp
00:04:22.651   10:42:11	-- setup/hugepages.sh@93 -- # local resv
00:04:22.651   10:42:11	-- setup/hugepages.sh@94 -- # local anon
00:04:22.652   10:42:11	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
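
verify_nr_hugepages first checks transparent hugepage state: the AnonHugePages counter is only consulted when THP is not fully disabled, and the pattern test above passes because this kernel reports "always [madvise] never". The same gate, sketched with the parsing helper from earlier:

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)   # only meaningful with THP on
  fi
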
00:04:22.652    10:42:11	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:22.652    10:42:11	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:22.652    10:42:11	-- setup/common.sh@18 -- # local node=
00:04:22.652    10:42:11	-- setup/common.sh@19 -- # local var val
00:04:22.652    10:42:11	-- setup/common.sh@20 -- # local mem_f mem
00:04:22.652    10:42:11	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.652    10:42:11	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.652    10:42:11	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.652    10:42:11	-- setup/common.sh@28 -- # mapfile -t mem
00:04:22.652    10:42:11	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652     10:42:11	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78621304 kB' 'MemAvailable:   82141640 kB' 'Buffers:            8064 kB' 'Cached:          9620596 kB' 'SwapCached:            0 kB' 'Active:          6412132 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019400 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        477840 kB' 'Mapped:           158440 kB' 'Shmem:           5544428 kB' 'KReclaimable:     178968 kB' 'Slab:             635476 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456508 kB' 'KernelStack:       15984 kB' 'PageTables:         7380 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7212704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199048 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
[... 88 xtrace lines elided: MemTotal through Mapped checked against AnonHugePages; every key falls through to continue ...]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.652    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.652    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.653    10:42:11	-- setup/common.sh@33 -- # echo 0
00:04:22.653    10:42:11	-- setup/common.sh@33 -- # return 0
00:04:22.653   10:42:11	-- setup/hugepages.sh@97 -- # anon=0
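What the trace above shows: setup/common.sh's get_meminfo() scanning /proc/meminfo key by key until it reaches AnonHugePages, echoing the value 0, which hugepages.sh@97 stores as anon=0. A minimal sketch of that function, reconstructed from the traced common.sh line numbers; the actual SPDK implementation may differ in detail:

    shopt -s extglob                              # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1                              # @17: key to look up, e.g. AnonHugePages
        local node=$2                             # @18: optional NUMA node; empty in this run
        local var val
        local mem_f mem
        mem_f=/proc/meminfo                       # @22: default system-wide source
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo  # @23: per-node override
        fi
        mapfile -t mem < "$mem_f"                 # @28: one array element per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")          # @29: strip "Node N " prefix (per-node files)
        while IFS=': ' read -r var val _; do      # @31: split "Key:    value kB" into var/val
            [[ $var == "$get" ]] && echo "$val" && return 0  # @32-33: match found, print value
            continue                              # @32: not the requested key, keep scanning
        done < <(printf '%s\n' "${mem[@]}")       # @16: replay the captured snapshot line by line
        return 1
    }

Because node is empty in this run, the [[ -e /sys/devices/system/node/node/meminfo ]] test at @23 fails and every lookup falls back to /proc/meminfo, which is why each call prints the full system-wide snapshot before scanning for its one key.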
00:04:22.653    10:42:11	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.653    10:42:11	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.653    10:42:11	-- setup/common.sh@18 -- # local node=
00:04:22.653    10:42:11	-- setup/common.sh@19 -- # local var val
00:04:22.653    10:42:11	-- setup/common.sh@20 -- # local mem_f mem
00:04:22.653    10:42:11	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.653    10:42:11	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.653    10:42:11	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.653    10:42:11	-- setup/common.sh@28 -- # mapfile -t mem
00:04:22.653    10:42:11	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653     10:42:11	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78623424 kB' 'MemAvailable:   82143760 kB' 'Buffers:            8064 kB' 'Cached:          9620600 kB' 'SwapCached:            0 kB' 'Active:          6411908 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019176 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        477716 kB' 'Mapped:           158428 kB' 'Shmem:           5544432 kB' 'KReclaimable:     178968 kB' 'Slab:             635464 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456496 kB' 'KernelStack:       15984 kB' 'PageTables:         7344 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7212924 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.653    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.653    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.920    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.920    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.921    10:42:11	-- setup/common.sh@33 -- # echo 0
00:04:22.921    10:42:11	-- setup/common.sh@33 -- # return 0
00:04:22.921   10:42:11	-- setup/hugepages.sh@99 -- # surp=0
00:04:22.921    10:42:11	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.921    10:42:11	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.921    10:42:11	-- setup/common.sh@18 -- # local node=
00:04:22.921    10:42:11	-- setup/common.sh@19 -- # local var val
00:04:22.921    10:42:11	-- setup/common.sh@20 -- # local mem_f mem
00:04:22.921    10:42:11	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.921    10:42:11	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.921    10:42:11	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.921    10:42:11	-- setup/common.sh@28 -- # mapfile -t mem
00:04:22.921    10:42:11	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921     10:42:11	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78624072 kB' 'MemAvailable:   82144408 kB' 'Buffers:            8064 kB' 'Cached:          9620600 kB' 'SwapCached:            0 kB' 'Active:          6411092 kB' 'Inactive:        3691500 kB' 'Active(anon):    6018360 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        477376 kB' 'Mapped:           158352 kB' 'Shmem:           5544432 kB' 'KReclaimable:     178968 kB' 'Slab:             635432 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456464 kB' 'KernelStack:       15984 kB' 'PageTables:         7340 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7212936 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.921    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.921    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.922    10:42:11	-- setup/common.sh@33 -- # echo 0
00:04:22.922    10:42:11	-- setup/common.sh@33 -- # return 0
00:04:22.922   10:42:11	-- setup/hugepages.sh@100 -- # resv=0
00:04:22.922   10:42:11	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:22.922  nr_hugepages=1024
00:04:22.922   10:42:11	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:22.922  resv_hugepages=0
00:04:22.922   10:42:11	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:22.922  surplus_hugepages=0
00:04:22.922   10:42:11	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.922  anon_hugepages=0
00:04:22.922   10:42:11	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.922   10:42:11	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
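At this point setup/hugepages.sh has collected all three derived counters (anon, surp, resv) alongside the configured nr_hugepages, and asserts that the kernel's hugepage accounting matches the requested allocation before re-reading HugePages_Total. A hypothetical condensation of the traced checks at hugepages.sh@97-@110; variable names follow the trace, and total stands in for the already-expanded 1024 on the left-hand side of the traced arithmetic tests:

    anon=$(get_meminfo AnonHugePages)          # @97: 0 kB of anonymous hugepages in use
    surp=$(get_meminfo HugePages_Surp)         # @99: 0 surplus pages
    resv=$(get_meminfo HugePages_Rsvd)         # @100: 0 reserved pages
    echo "nr_hugepages=$nr_hugepages"          # @102: 1024 requested
    echo "resv_hugepages=$resv"                # @103
    echo "surplus_hugepages=$surp"             # @104
    echo "anon_hugepages=$anon"                # @105
    (( total == nr_hugepages + surp + resv ))  # @107: kernel total covers request plus slack
    (( total == nr_hugepages ))                # @109: and no surplus/reserved slack remains

Both tests pass with total=1024, surp=0, resv=0, so the scan that follows (get_meminfo HugePages_Total, @110) simply re-reads the system-wide counter; node is again empty, so /proc/meminfo is used once more.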
00:04:22.922    10:42:11	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.922    10:42:11	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.922    10:42:11	-- setup/common.sh@18 -- # local node=
00:04:22.922    10:42:11	-- setup/common.sh@19 -- # local var val
00:04:22.922    10:42:11	-- setup/common.sh@20 -- # local mem_f mem
00:04:22.922    10:42:11	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.922    10:42:11	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.922    10:42:11	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.922    10:42:11	-- setup/common.sh@28 -- # mapfile -t mem
00:04:22.922    10:42:11	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922     10:42:11	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78624044 kB' 'MemAvailable:   82144380 kB' 'Buffers:            8064 kB' 'Cached:          9620640 kB' 'SwapCached:            0 kB' 'Active:          6411084 kB' 'Inactive:        3691500 kB' 'Active(anon):    6018352 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        477288 kB' 'Mapped:           158352 kB' 'Shmem:           5544472 kB' 'KReclaimable:     178968 kB' 'Slab:             635432 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456464 kB' 'KernelStack:       15968 kB' 'PageTables:         7284 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7212952 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199032 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.922    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.922    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.923    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.923    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.923    10:42:11	-- setup/common.sh@33 -- # echo 1024
00:04:22.923    10:42:11	-- setup/common.sh@33 -- # return 0
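What the long run of `continue` lines above is doing: get_meminfo splits each meminfo line on ': ' into a key and a value, skips every field that is not the requested key (each skipped field is one `-- # continue` in the trace), then echoes the matching value (1024 for HugePages_Total here) and returns 0. A minimal standalone sketch of the global /proc/meminfo case, reconstructed from the trace; the real setup/common.sh goes through mapfile/printf rather than reading the file directly:

    get_meminfo() {               # sketch: global /proc/meminfo case only
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # one "continue" per skipped field above
            echo "$val"                        # 1024 for HugePages_Total in this run
            return 0
        done < /proc/meminfo
        return 1
    }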
00:04:22.923   10:42:11	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:22.923   10:42:11	-- setup/hugepages.sh@112 -- # get_nodes
00:04:22.923   10:42:11	-- setup/hugepages.sh@27 -- # local node
00:04:22.923   10:42:11	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.923   10:42:11	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.923   10:42:11	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.923   10:42:11	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.923   10:42:11	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:22.923   10:42:11	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
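get_nodes, traced just above, discovers the NUMA topology with an extglob over /sys/devices/system/node/node+([0-9]) and keys each node's 2 MiB hugepage count by its index (512 and 512 here, so no_nodes=2). A sketch under the assumption that the 512 comes from each node's nr_hugepages sysfs file, which the trace does not show expanded:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node", leaving the index
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this host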
00:04:22.923   10:42:11	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.923   10:42:11	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.923    10:42:11	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.923    10:42:11	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.923    10:42:11	-- setup/common.sh@18 -- # local node=0
00:04:22.923    10:42:11	-- setup/common.sh@19 -- # local var val
00:04:22.923    10:42:11	-- setup/common.sh@20 -- # local mem_f mem
00:04:22.924    10:42:11	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.924    10:42:11	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.924    10:42:11	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.924    10:42:11	-- setup/common.sh@28 -- # mapfile -t mem
00:04:22.924    10:42:11	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924     10:42:11	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        44863196 kB' 'MemUsed:         3201652 kB' 'SwapCached:            0 kB' 'Active:          1255248 kB' 'Inactive:         171132 kB' 'Active(anon):    1044696 kB' 'Inactive(anon):        0 kB' 'Active(file):     210552 kB' 'Inactive(file):   171132 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       1267704 kB' 'Mapped:            88912 kB' 'AnonPages:        161908 kB' 'Shmem:            886020 kB' 'KernelStack:        8520 kB' 'PageTables:         3172 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80708 kB' 'Slab:             302684 kB' 'SReclaimable:      80708 kB' 'SUnreclaim:       221976 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.924    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.924    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@33 -- # echo 0
00:04:22.925    10:42:11	-- setup/common.sh@33 -- # return 0
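The HugePages_Surp lookups for node 0 (above) and node 1 (next) take get_meminfo's per-node branch: mem_f switches to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix, and the mapfile plus parameter-expansion pair strips that prefix before the same key loop runs. A sketch of that branch, with extglob enabled as the +([0-9]) pattern requires:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    while IFS=': ' read -r var val _; do
        [[ $var == HugePages_Surp ]] && { echo "$val"; break; }   # 0 in this run
    done < <(printf '%s\n' "${mem[@]}")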
00:04:22.925   10:42:11	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.925   10:42:11	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.925   10:42:11	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.925    10:42:11	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:22.925    10:42:11	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.925    10:42:11	-- setup/common.sh@18 -- # local node=1
00:04:22.925    10:42:11	-- setup/common.sh@19 -- # local var val
00:04:22.925    10:42:11	-- setup/common.sh@20 -- # local mem_f mem
00:04:22.925    10:42:11	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.925    10:42:11	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:22.925    10:42:11	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:22.925    10:42:11	-- setup/common.sh@28 -- # mapfile -t mem
00:04:22.925    10:42:11	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925     10:42:11	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220584 kB' 'MemFree:        33760580 kB' 'MemUsed:        10460004 kB' 'SwapCached:            0 kB' 'Active:          5156200 kB' 'Inactive:        3520368 kB' 'Active(anon):    4974020 kB' 'Inactive(anon):        0 kB' 'Active(file):     182180 kB' 'Inactive(file):  3520368 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8361016 kB' 'Mapped:            69440 kB' 'AnonPages:        315784 kB' 'Shmem:           4658468 kB' 'KernelStack:        7464 kB' 'PageTables:         4168 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      98260 kB' 'Slab:             332748 kB' 'SReclaimable:      98260 kB' 'SUnreclaim:       234488 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.925    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.925    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # continue
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # IFS=': '
00:04:22.926    10:42:11	-- setup/common.sh@31 -- # read -r var val _
00:04:22.926    10:42:11	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.926    10:42:11	-- setup/common.sh@33 -- # echo 0
00:04:22.926    10:42:11	-- setup/common.sh@33 -- # return 0
00:04:22.926   10:42:11	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.926   10:42:11	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.926   10:42:11	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.926   10:42:11	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:22.926  node0=512 expecting 512
00:04:22.926   10:42:11	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.926   10:42:11	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.926   10:42:11	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.926   10:42:11	-- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:22.926  node1=512 expecting 512
00:04:22.926   10:42:11	-- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
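The sorted_t/sorted_s assignments above are a set trick: each distinct per-node count becomes an array index, so if every node holds the same number of pages each array collapses to exactly one key. The final `[[ 512 == 512 ]]` is presumably that lone test-array key compared against the lone sysfs key; the trace only shows the expanded form, so the comparison below is a reconstruction:

    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # index = count seen in the test array
        sorted_s[nodes_sys[node]]=1    # index = count seen in sysfs
    done
    # one index apiece when allocation is even: "512" vs "512"
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]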
00:04:22.926  
00:04:22.926  real	0m3.533s
00:04:22.926  user	0m1.290s
00:04:22.926  sys	0m2.340s
00:04:22.926   10:42:11	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:22.926   10:42:11	-- common/autotest_common.sh@10 -- # set +x
00:04:22.926  ************************************
00:04:22.926  END TEST even_2G_alloc
00:04:22.926  ************************************
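run_test, which opens odd_alloc below, is the wrapper that produced the banners and the real/user/sys block just above: it prints a START banner, times the named function with xtrace temporarily silenced, and closes with an END banner. A heavily condensed sketch; the argument check mirrors the `'[' 2 -le 1 ']'` line in the trace, the lone-argument fallback is a guess, and the real autotest_common.sh body does more bookkeeping:

    run_test() {
        local name=$1
        [ $# -le 1 ] && set -- "$1" "$1"   # guess: a lone arg doubles as the command
        shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                           # emits the real/user/sys block
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }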
00:04:22.926   10:42:11	-- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:22.926   10:42:11	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:22.926   10:42:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:22.926   10:42:11	-- common/autotest_common.sh@10 -- # set +x
00:04:22.926  ************************************
00:04:22.926  START TEST odd_alloc
00:04:22.926  ************************************
00:04:22.926   10:42:11	-- common/autotest_common.sh@1114 -- # odd_alloc
00:04:22.926   10:42:11	-- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:22.926   10:42:11	-- setup/hugepages.sh@49 -- # local size=2098176
00:04:22.926   10:42:11	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:22.926   10:42:11	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:22.926   10:42:11	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:22.926   10:42:11	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:22.926   10:42:11	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:22.926   10:42:11	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:22.926   10:42:11	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:22.926   10:42:11	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:22.926   10:42:11	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:22.926   10:42:11	-- setup/hugepages.sh@83 -- # : 513
00:04:22.926   10:42:11	-- setup/hugepages.sh@84 -- # : 1
00:04:22.926   10:42:11	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:22.926   10:42:11	-- setup/hugepages.sh@83 -- # : 0
00:04:22.926   10:42:11	-- setup/hugepages.sh@84 -- # : 0
00:04:22.926   10:42:11	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.926   10:42:11	-- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:22.926   10:42:11	-- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:22.926   10:42:11	-- setup/hugepages.sh@160 -- # setup output
00:04:22.926   10:42:11	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.926   10:42:11	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
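odd_alloc asks get_test_nr_hugepages for 2098176 kB, i.e. HUGEMEM=2049 MiB of 2048 kB pages: 2098176 / 2048 = 1024.5, which the script takes up to the deliberately odd total 1025. The per-node loop above then hands out 513 and 512, since an odd count cannot split evenly and one node absorbs the extra page. The trace shows only the results, not the rounding rule, so the ceiling division below is an assumption:

    size_kb=2098176                                # 2049 MiB * 1024
    page_kb=2048
    nr=$(( (size_kb + page_kb - 1) / page_kb ))    # ceil -> 1025
    echo "$(( nr / 2 )) + $(( nr - nr / 2 ))"      # 512 + 513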
00:04:26.313  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:26.313  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:26.313  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:26.313   10:42:15	-- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:26.313   10:42:15	-- setup/hugepages.sh@89 -- # local node
00:04:26.313   10:42:15	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.313   10:42:15	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.313   10:42:15	-- setup/hugepages.sh@92 -- # local surp
00:04:26.313   10:42:15	-- setup/hugepages.sh@93 -- # local resv
00:04:26.313   10:42:15	-- setup/hugepages.sh@94 -- # local anon
00:04:26.313   10:42:15	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.313    10:42:15	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.313    10:42:15	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.313    10:42:15	-- setup/common.sh@18 -- # local node=
00:04:26.313    10:42:15	-- setup/common.sh@19 -- # local var val
00:04:26.313    10:42:15	-- setup/common.sh@20 -- # local mem_f mem
00:04:26.313    10:42:15	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.313    10:42:15	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.313    10:42:15	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.313    10:42:15	-- setup/common.sh@28 -- # mapfile -t mem
00:04:26.313    10:42:15	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313     10:42:15	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78631088 kB' 'MemAvailable:   82151424 kB' 'Buffers:            8064 kB' 'Cached:          9620712 kB' 'SwapCached:            0 kB' 'Active:          6412448 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019716 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478488 kB' 'Mapped:           158460 kB' 'Shmem:           5544544 kB' 'KReclaimable:     178968 kB' 'Slab:             635324 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456356 kB' 'KernelStack:       15968 kB' 'PageTables:         7288 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481720 kB' 'Committed_AS:    7213280 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.313    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.313    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.314    10:42:15	-- setup/common.sh@33 -- # echo 0
00:04:26.314    10:42:15	-- setup/common.sh@33 -- # return 0
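verify_nr_hugepages opens with the transparent-hugepage gate at hugepages.sh@96: the string `always [madvise] never` in the trace is the kernel's THP mode, and since it is not pinned to [never], the test reads AnonHugePages so THP-backed anonymous memory can be accounted separately; it is 0 kB here, hence anon=0 on the next line. A sketch, assuming the usual sysfs location for the mode string:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in this run
    else
        anon=0
    fi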
00:04:26.314   10:42:15	-- setup/hugepages.sh@97 -- # anon=0
00:04:26.314    10:42:15	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.314    10:42:15	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.314    10:42:15	-- setup/common.sh@18 -- # local node=
00:04:26.314    10:42:15	-- setup/common.sh@19 -- # local var val
00:04:26.314    10:42:15	-- setup/common.sh@20 -- # local mem_f mem
00:04:26.314    10:42:15	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.314    10:42:15	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.314    10:42:15	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.314    10:42:15	-- setup/common.sh@28 -- # mapfile -t mem
00:04:26.314    10:42:15	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314     10:42:15	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78641764 kB' 'MemAvailable:   82162100 kB' 'Buffers:            8064 kB' 'Cached:          9620712 kB' 'SwapCached:            0 kB' 'Active:          6412564 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019832 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478700 kB' 'Mapped:           158372 kB' 'Shmem:           5544544 kB' 'KReclaimable:     178968 kB' 'Slab:             635284 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456316 kB' 'KernelStack:       16048 kB' 'PageTables:         7504 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481720 kB' 'Committed_AS:    7216084 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      198984 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.314    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.314    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.315    10:42:15	-- setup/common.sh@33 -- # echo 0
00:04:26.315    10:42:15	-- setup/common.sh@33 -- # return 0
00:04:26.315   10:42:15	-- setup/hugepages.sh@99 -- # surp=0
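The pass that just completed is one full run of the get_meminfo helper from setup/common.sh: dump /proc/meminfo once, then walk each "key: value" line until the requested key (HugePages_Surp here) matches, echo its value, and return 0. A minimal reconstruction from the xtrace alone, assuming only the function wrapper (the loop body, prefix strip, and sysfs fallback are visible verbatim in the trace):

  # Sketch of get_meminfo based only on this trace; not the verbatim
  # SPDK source.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # with a node index, read the per-NUMA-node meminfo from sysfs
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
  }

Every non-matching key logs one [[ ... ]] test plus a continue, which is why a single lookup produces dozens of trace lines.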
00:04:26.315    10:42:15	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.315    10:42:15	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:26.315    10:42:15	-- setup/common.sh@18 -- # local node=
00:04:26.315    10:42:15	-- setup/common.sh@19 -- # local var val
00:04:26.315    10:42:15	-- setup/common.sh@20 -- # local mem_f mem
00:04:26.315    10:42:15	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.315    10:42:15	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.315    10:42:15	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.315    10:42:15	-- setup/common.sh@28 -- # mapfile -t mem
00:04:26.315    10:42:15	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315     10:42:15	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78642792 kB' 'MemAvailable:   82163128 kB' 'Buffers:            8064 kB' 'Cached:          9620724 kB' 'SwapCached:            0 kB' 'Active:          6412476 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019744 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478656 kB' 'Mapped:           158376 kB' 'Shmem:           5544556 kB' 'KReclaimable:     178968 kB' 'Slab:             635284 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456316 kB' 'KernelStack:       15968 kB' 'PageTables:         7272 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481720 kB' 'Committed_AS:    7216244 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      198952 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.315    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.315    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.316    10:42:15	-- setup/common.sh@33 -- # echo 0
00:04:26.316    10:42:15	-- setup/common.sh@33 -- # return 0
00:04:26.316   10:42:15	-- setup/hugepages.sh@100 -- # resv=0
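The second pass repeats the identical walk for HugePages_Rsvd and also comes back 0: no hugepages are currently reserved against pending faults. Outside the harness, the same lookup is a one-liner (an equivalent, not the script's own code):

  # prints 0 on this box, matching the meminfo dump above
  awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo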
00:04:26.316   10:42:15	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:26.316  nr_hugepages=1025
00:04:26.316   10:42:15	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:26.316  resv_hugepages=0
00:04:26.316   10:42:15	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:26.316  surplus_hugepages=0
00:04:26.316   10:42:15	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:26.316  anon_hugepages=0
00:04:26.316   10:42:15	-- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:26.316   10:42:15	-- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
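With the values echoed above (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), both arithmetic guards reduce to 1025 == 1025 + 0 + 0 and pass, so the script goes on to re-read HugePages_Total as a final cross-check. The same accounting identity can be eyeballed straight from the kernel counters:

  # HugePages_Total should equal the requested count plus Rsvd and Surp
  grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo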
00:04:26.316    10:42:15	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:26.316    10:42:15	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:26.316    10:42:15	-- setup/common.sh@18 -- # local node=
00:04:26.316    10:42:15	-- setup/common.sh@19 -- # local var val
00:04:26.316    10:42:15	-- setup/common.sh@20 -- # local mem_f mem
00:04:26.316    10:42:15	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.316    10:42:15	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.316    10:42:15	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.316    10:42:15	-- setup/common.sh@28 -- # mapfile -t mem
00:04:26.316    10:42:15	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316     10:42:15	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78641244 kB' 'MemAvailable:   82161580 kB' 'Buffers:            8064 kB' 'Cached:          9620740 kB' 'SwapCached:            0 kB' 'Active:          6413408 kB' 'Inactive:        3691500 kB' 'Active(anon):    6020676 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        479572 kB' 'Mapped:           158408 kB' 'Shmem:           5544572 kB' 'KReclaimable:     178968 kB' 'Slab:             635284 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456316 kB' 'KernelStack:       16288 kB' 'PageTables:         8088 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53481720 kB' 'Committed_AS:    7217512 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199128 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1025' 'HugePages_Free:     1025' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2099200 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.316    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.316    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.577    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.577    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.578    10:42:15	-- setup/common.sh@33 -- # echo 1025
00:04:26.578    10:42:15	-- setup/common.sh@33 -- # return 0
00:04:26.578   10:42:15	-- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
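The re-read returns 1025 (the echo 1025 at common.sh@33 above), confirming the kernel actually granted the deliberately odd page count; a non-round number like 1025 cannot split evenly across two NUMA nodes, which is presumably what this test case exercises. For reference, a count like this is typically requested through the vm.nr_hugepages knob (illustrative; this excerpt does not show the allocation step itself):

  echo 1025 | sudo tee /proc/sys/vm/nr_hugepages
  grep '^HugePages_Total:' /proc/meminfo   # expect 1025 once granted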
00:04:26.578   10:42:15	-- setup/hugepages.sh@112 -- # get_nodes
00:04:26.578   10:42:15	-- setup/hugepages.sh@27 -- # local node
00:04:26.578   10:42:15	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.578   10:42:15	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:26.578   10:42:15	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.578   10:42:15	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:26.578   10:42:15	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:26.578   10:42:15	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
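get_nodes then enumerates the NUMA topology: two nodes, with expected per-node counts of 512 on node0 and 513 on node1, summing to the global 1025. The xtrace only shows the already-expanded assignments, so the sysfs read below is an assumption about where those values come from:

  # Reconstruction of get_nodes; nodes_sys is an array declared by the
  # caller, and the per-node nr_hugepages source is assumed, since the
  # trace shows only the final values 512 and 513.
  get_nodes() {
      local node
      shopt -s extglob
      for node in /sys/devices/system/node/node+([0-9]); do
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}
      (( no_nodes > 0 ))   # fail if no NUMA node directories were found
  }

After bumping each nodes_test entry by resv, the loop that follows pulls HugePages_Surp from /sys/devices/system/node/node0/meminfo, whose per-line "Node 0" prefix is exactly what the extglob strip in get_meminfo removes.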
00:04:26.578   10:42:15	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.578   10:42:15	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.578    10:42:15	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.578    10:42:15	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.578    10:42:15	-- setup/common.sh@18 -- # local node=0
00:04:26.578    10:42:15	-- setup/common.sh@19 -- # local var val
00:04:26.578    10:42:15	-- setup/common.sh@20 -- # local mem_f mem
00:04:26.578    10:42:15	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.578    10:42:15	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.578    10:42:15	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:26.578    10:42:15	-- setup/common.sh@28 -- # mapfile -t mem
00:04:26.578    10:42:15	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578     10:42:15	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        44891520 kB' 'MemUsed:         3173328 kB' 'SwapCached:            0 kB' 'Active:          1256796 kB' 'Inactive:         171132 kB' 'Active(anon):    1046244 kB' 'Inactive(anon):        0 kB' 'Active(file):     210552 kB' 'Inactive(file):   171132 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       1267736 kB' 'Mapped:            88924 kB' 'AnonPages:        163448 kB' 'Shmem:            886052 kB' 'KernelStack:        8632 kB' 'PageTables:         3568 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80708 kB' 'Slab:             302596 kB' 'SReclaimable:      80708 kB' 'SUnreclaim:       221888 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.578    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.578    10:42:15	-- setup/common.sh@32 -- # continue
00:04:26.578    10:42:15	-- setup/common.sh@31-32 -- # [condensed: the IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue sequence repeats verbatim for every remaining node0 field, Inactive(anon) through HugePages_Free, with no match]
00:04:26.579    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.579    10:42:15	-- setup/common.sh@33 -- # echo 0
00:04:26.579    10:42:15	-- setup/common.sh@33 -- # return 0
00:04:26.579   10:42:15	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.579   10:42:15	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.579   10:42:15	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.579    10:42:15	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:26.579    10:42:15	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.579    10:42:15	-- setup/common.sh@18 -- # local node=1
00:04:26.579    10:42:15	-- setup/common.sh@19 -- # local var val
00:04:26.579    10:42:15	-- setup/common.sh@20 -- # local mem_f mem
00:04:26.579    10:42:15	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.579    10:42:15	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:26.579    10:42:15	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:26.579    10:42:15	-- setup/common.sh@28 -- # mapfile -t mem
00:04:26.579    10:42:15	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.579    10:42:15	-- setup/common.sh@31 -- # IFS=': '
00:04:26.579    10:42:15	-- setup/common.sh@31 -- # read -r var val _
00:04:26.579     10:42:15	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220584 kB' 'MemFree:        33750092 kB' 'MemUsed:        10470492 kB' 'SwapCached:            0 kB' 'Active:          5155856 kB' 'Inactive:        3520368 kB' 'Active(anon):    4973676 kB' 'Inactive(anon):        0 kB' 'Active(file):     182180 kB' 'Inactive(file):  3520368 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8361100 kB' 'Mapped:            69452 kB' 'AnonPages:        315196 kB' 'Shmem:           4658552 kB' 'KernelStack:        7432 kB' 'PageTables:         4072 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      98260 kB' 'Slab:             332676 kB' 'SReclaimable:      98260 kB' 'SUnreclaim:       234416 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   513' 'HugePages_Free:    513' 'HugePages_Surp:      0'
00:04:26.579    10:42:15	-- setup/common.sh@31-32 -- # [condensed: the same read/compare/continue sequence repeats for every node1 field from MemTotal through HugePages_Free; none match HugePages_Surp]
00:04:26.580    10:42:15	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.580    10:42:15	-- setup/common.sh@33 -- # echo 0
00:04:26.580    10:42:15	-- setup/common.sh@33 -- # return 0
00:04:26.580   10:42:15	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
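
The two calls above are the per-node halves of the same helper. For readability, here is a minimal bash sketch of what the traced setup/common.sh get_meminfo is doing, reconstructed from the xtrace (treat it as an illustration, not the exact SPDK source): pick /proc/meminfo or the node's sysfs meminfo, strip the "Node N " prefix with an extglob pattern, then scan "field: value" pairs until the requested field matches.

    shopt -s extglob                     # needed for the +([0-9]) pattern below
    get_meminfo() {                      # sketch reconstructed from the trace
        local get=$1 node=$2
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                    # e.g. 0 for HugePages_Surp above
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Surp 1           # -> 0 on this machine

Both node reads return 0 here, which is why hugepages.sh@116-117 only ever fold "+= 0" into nodes_test for each node.
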
00:04:26.580   10:42:15	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.580   10:42:15	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.580   10:42:15	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.580   10:42:15	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:26.580  node0=512 expecting 513
00:04:26.580   10:42:15	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.580   10:42:15	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.580   10:42:15	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.580   10:42:15	-- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:26.580  node1=513 expecting 512
00:04:26.580   10:42:15	-- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:26.580  
00:04:26.580  real	0m3.540s
00:04:26.580  user	0m1.359s
00:04:26.580  sys	0m2.278s
00:04:26.580   10:42:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:26.580   10:42:15	-- common/autotest_common.sh@10 -- # set +x
00:04:26.580  ************************************
00:04:26.580  END TEST odd_alloc
00:04:26.580  ************************************
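
Before the next test starts, it is worth noting the idiom behind the "node0=512 expecting 513" / "node1=513 expecting 512" lines at hugepages.sh@126-130: two indexed arrays are used as sets, with the page counts themselves as indices, so the final [[ 512 513 == \5\1\2\ \5\1\3 ]] check passes as long as the same set of counts shows up, whichever NUMA node got the odd page. A hedged sketch (which array feeds which column of the echo is inferred from the output):

    nodes_test=([0]=513 [1]=512)       # what odd_alloc asked for per node (inferred)
    nodes_sys=([0]=512 [1]=513)        # what the kernel actually placed (inferred)
    sorted_t=() sorted_s=()            # indexed arrays whose indices act as sets
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # mark the expected count: 513, then 512
        sorted_s[nodes_sys[node]]=1    # mark the observed count: 512, then 513
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # indexed-array keys expand in ascending order, so both sides read "512 513"
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "odd_alloc OK"
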
00:04:26.580   10:42:15	-- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:26.580   10:42:15	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:26.580   10:42:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:26.580   10:42:15	-- common/autotest_common.sh@10 -- # set +x
00:04:26.580  ************************************
00:04:26.580  START TEST custom_alloc
00:04:26.580  ************************************
00:04:26.580   10:42:15	-- common/autotest_common.sh@1114 -- # custom_alloc
00:04:26.580   10:42:15	-- setup/hugepages.sh@167 -- # local IFS=,
00:04:26.580   10:42:15	-- setup/hugepages.sh@169 -- # local node
00:04:26.580   10:42:15	-- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:26.580   10:42:15	-- setup/hugepages.sh@170 -- # local nodes_hp
00:04:26.580   10:42:15	-- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:26.580   10:42:15	-- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:26.580   10:42:15	-- setup/hugepages.sh@49 -- # local size=1048576
00:04:26.580   10:42:15	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:26.580   10:42:15	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.580   10:42:15	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.580   10:42:15	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.580   10:42:15	-- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:26.580   10:42:15	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.580   10:42:15	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.580   10:42:15	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.580   10:42:15	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:26.580   10:42:15	-- setup/hugepages.sh@83 -- # : 256
00:04:26.580   10:42:15	-- setup/hugepages.sh@84 -- # : 1
00:04:26.580   10:42:15	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:26.580   10:42:15	-- setup/hugepages.sh@83 -- # : 0
00:04:26.580   10:42:15	-- setup/hugepages.sh@84 -- # : 0
00:04:26.580   10:42:15	-- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
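
The block above (hugepages.sh@49-84) turns the 1048576 kB request into nr_hugepages=512 at the default 2048 kB hugepage size, then walks _no_nodes down to zero to spread the pages over both NUMA nodes, 256 each; the bare ": 256" / ": 1" lines are the no-op arithmetic steps of that loop. A minimal sketch of the distribution, with the loop body inferred from the traced expansions:

    # spread _nr_hugepages across _no_nodes nodes; integer division means any
    # remainder accumulates toward node 0 (e.g. 513 pages would split 257/256)
    _nr_hugepages=512 _no_nodes=2
    nodes_test=()
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 256, 256
        : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))          # ': 256', ': 0'
        : $(( --_no_nodes ))                                         # ': 1',   ': 0'
    done
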
00:04:26.580   10:42:15	-- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:26.580   10:42:15	-- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:26.580   10:42:15	-- setup/hugepages.sh@49 -- # local size=2097152
00:04:26.580   10:42:15	-- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:26.580   10:42:15	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.580   10:42:15	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.580   10:42:15	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.580   10:42:15	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:26.580   10:42:15	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.580   10:42:15	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.580   10:42:15	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.580   10:42:15	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:26.580   10:42:15	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:26.580   10:42:15	-- setup/hugepages.sh@78 -- # return 0
00:04:26.580   10:42:15	-- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:26.580   10:42:15	-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:26.580   10:42:15	-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:26.580   10:42:15	-- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:26.580   10:42:15	-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:26.580   10:42:15	-- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:26.580   10:42:15	-- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.580   10:42:15	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.580   10:42:15	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:26.580   10:42:15	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.580   10:42:15	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.580   10:42:15	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.580   10:42:15	-- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:26.580   10:42:15	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:26.580   10:42:15	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:26.580   10:42:15	-- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:26.580   10:42:15	-- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:26.580   10:42:15	-- setup/hugepages.sh@78 -- # return 0
00:04:26.580   10:42:15	-- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
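
With nodes_hp set to 512 pages on node 0 and 1024 on node 1, custom_alloc hands the layout to setup.sh through the HUGENODE variable; note that once nodes_hp is populated, get_test_nr_hugepages_per_node (@74-76 above) simply copies it into nodes_test instead of splitting evenly. A sketch of how the comma-joined string at @187 is assembled, with the IFS=, from @167 doing the join:

    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))      # 1536 pages in total
    done
    IFS=,                                          # makes "${HUGENODE[*]}" comma-join
    HUGENODE="${HUGENODE[*]}"
    echo "$HUGENODE"                               # nodes_hp[0]=512,nodes_hp[1]=1024
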
00:04:26.580   10:42:15	-- setup/hugepages.sh@187 -- # setup output
00:04:26.580   10:42:15	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.580   10:42:15	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:04:29.867  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:29.867  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:29.867  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:29.868  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:29.868  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:29.868  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:29.868  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:29.868  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:29.868   10:42:18	-- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:29.868   10:42:18	-- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:29.868   10:42:18	-- setup/hugepages.sh@89 -- # local node
00:04:29.868   10:42:18	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:29.868   10:42:18	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:29.868   10:42:18	-- setup/hugepages.sh@92 -- # local surp
00:04:29.868   10:42:18	-- setup/hugepages.sh@93 -- # local resv
00:04:29.868   10:42:18	-- setup/hugepages.sh@94 -- # local anon
00:04:29.868   10:42:18	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:29.868    10:42:18	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:29.868    10:42:18	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:29.868    10:42:18	-- setup/common.sh@18 -- # local node=
00:04:29.868    10:42:18	-- setup/common.sh@19 -- # local var val
00:04:29.868    10:42:18	-- setup/common.sh@20 -- # local mem_f mem
00:04:29.868    10:42:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.868    10:42:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.868    10:42:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.868    10:42:18	-- setup/common.sh@28 -- # mapfile -t mem
00:04:29.868    10:42:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.868    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.868    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.868     10:42:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        77588928 kB' 'MemAvailable:   81109264 kB' 'Buffers:            8064 kB' 'Cached:          9620824 kB' 'SwapCached:            0 kB' 'Active:          6412860 kB' 'Inactive:        3691500 kB' 'Active(anon):    6020128 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478736 kB' 'Mapped:           158452 kB' 'Shmem:           5544656 kB' 'KReclaimable:     178968 kB' 'Slab:             634964 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       455996 kB' 'KernelStack:       16000 kB' 'PageTables:         7348 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958456 kB' 'Committed_AS:    7213792 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199000 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:29.868    10:42:18	-- setup/common.sh@31-32 -- # [condensed: the read/compare/continue sequence repeats for every /proc/meminfo field from MemTotal through HardwareCorrupted; none match AnonHugePages]
00:04:29.868    10:42:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:29.868    10:42:18	-- setup/common.sh@33 -- # echo 0
00:04:29.868    10:42:18	-- setup/common.sh@33 -- # return 0
00:04:29.868   10:42:18	-- setup/hugepages.sh@97 -- # anon=0
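
anon=0 comes from the probe at hugepages.sh@96-97: the script only bothers reading AnonHugePages when transparent hugepages are not disabled outright, which is what the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above decides. A sketch of that check (the sysfs path is inferred from the "always [madvise] never" string in the trace):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB on this box, so THP is inert here
    else
        anon=0
    fi
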
00:04:29.868    10:42:18	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:29.868    10:42:18	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.868    10:42:18	-- setup/common.sh@18 -- # local node=
00:04:29.868    10:42:18	-- setup/common.sh@19 -- # local var val
00:04:29.868    10:42:18	-- setup/common.sh@20 -- # local mem_f mem
00:04:29.868    10:42:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.868    10:42:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.868    10:42:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.868    10:42:18	-- setup/common.sh@28 -- # mapfile -t mem
00:04:29.868    10:42:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.868    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.868    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.868     10:42:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        77589260 kB' 'MemAvailable:   81109596 kB' 'Buffers:            8064 kB' 'Cached:          9620828 kB' 'SwapCached:            0 kB' 'Active:          6412652 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019920 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478476 kB' 'Mapped:           158360 kB' 'Shmem:           5544660 kB' 'KReclaimable:     178968 kB' 'Slab:             634940 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       455972 kB' 'KernelStack:       16000 kB' 'PageTables:         7336 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958456 kB' 'Committed_AS:    7213804 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      198984 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:29.868    10:42:18	-- setup/common.sh@31-32 -- # [condensed: the read/compare/continue sequence repeats for every /proc/meminfo field from MemTotal through HugePages_Rsvd; none match HugePages_Surp]
00:04:29.869    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.869    10:42:18	-- setup/common.sh@33 -- # echo 0
00:04:29.869    10:42:18	-- setup/common.sh@33 -- # return 0
00:04:29.869   10:42:18	-- setup/hugepages.sh@99 -- # surp=0
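(The trace above is setup/common.sh's get_meminfo scanning every /proc/meminfo key until it matches HugePages_Surp, then echoing the value that hugepages.sh@99 stores in surp. Below is a minimal standalone sketch of that parsing pattern, simplified from the traced mapfile/printf pipeline; get_meminfo_sketch is a hypothetical name, not the SPDK function itself.)

    #!/usr/bin/env bash
    # Hedged sketch, not the SPDK source verbatim: a simplified equivalent
    # of the key/value scan the trace exercises.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        # With a node argument, use the per-node sysfs copy when present.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }        # per-node lines carry "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < "$mem_f"
        echo 0                                 # key absent -> 0, like the trace
    }
    get_meminfo_sketch HugePages_Surp          # prints 0 on the host above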
00:04:29.869    10:42:18	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.869    10:42:18	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.869    10:42:18	-- setup/common.sh@18 -- # local node=
00:04:29.869    10:42:18	-- setup/common.sh@19 -- # local var val
00:04:29.869    10:42:18	-- setup/common.sh@20 -- # local mem_f mem
00:04:29.869    10:42:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.869    10:42:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.869    10:42:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.869    10:42:18	-- setup/common.sh@28 -- # mapfile -t mem
00:04:29.869    10:42:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.869    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.869    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.869     10:42:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        77588960 kB' 'MemAvailable:   81109296 kB' 'Buffers:            8064 kB' 'Cached:          9620840 kB' 'SwapCached:            0 kB' 'Active:          6412636 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019904 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478476 kB' 'Mapped:           158360 kB' 'Shmem:           5544672 kB' 'KReclaimable:     178968 kB' 'Slab:             634940 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       455972 kB' 'KernelStack:       16000 kB' 'PageTables:         7336 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958456 kB' 'Committed_AS:    7213816 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199000 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:29.869    10:42:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # continue
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # continue
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # continue
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # continue
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # continue
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # continue
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.870    10:42:18	-- setup/common.sh@32 -- # continue
00:04:29.870    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.132    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.132    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.133    10:42:18	-- setup/common.sh@33 -- # echo 0
00:04:30.133    10:42:18	-- setup/common.sh@33 -- # return 0
00:04:30.133   10:42:18	-- setup/hugepages.sh@100 -- # resv=0
00:04:30.133   10:42:18	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:30.133  nr_hugepages=1536
00:04:30.133   10:42:18	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:30.133  resv_hugepages=0
00:04:30.133   10:42:18	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:30.133  surplus_hugepages=0
00:04:30.133   10:42:18	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:30.133  anon_hugepages=0
00:04:30.133   10:42:18	-- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:30.133   10:42:18	-- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
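(A worked illustration of the accounting identity the two checks above enforce, using the values echoed by this run; this is an example, not part of the log. hugepages.sh@107 requires that the kernel's HugePages_Total equal the requested nr_hugepages plus surplus and reserved pages, and @109 additionally requires surplus and reserved to be zero here.)

    # nr_hugepages=1536, surp=0, resv=0 -> both @107 and @109 hold:
    nr_hugepages=1536 surp=0 resv=0
    (( 1536 == nr_hugepages + surp + resv )) && (( 1536 == nr_hugepages )) &&
        echo "hugepage accounting consistent"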
00:04:30.133    10:42:18	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.133    10:42:18	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.133    10:42:18	-- setup/common.sh@18 -- # local node=
00:04:30.133    10:42:18	-- setup/common.sh@19 -- # local var val
00:04:30.133    10:42:18	-- setup/common.sh@20 -- # local mem_f mem
00:04:30.133    10:42:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.133    10:42:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.133    10:42:18	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.133    10:42:18	-- setup/common.sh@28 -- # mapfile -t mem
00:04:30.133    10:42:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133     10:42:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        77589968 kB' 'MemAvailable:   81110304 kB' 'Buffers:            8064 kB' 'Cached:          9620868 kB' 'SwapCached:            0 kB' 'Active:          6412300 kB' 'Inactive:        3691500 kB' 'Active(anon):    6019568 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478068 kB' 'Mapped:           158360 kB' 'Shmem:           5544700 kB' 'KReclaimable:     178968 kB' 'Slab:             634940 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       455972 kB' 'KernelStack:       15984 kB' 'PageTables:         7280 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    52958456 kB' 'Committed_AS:    7213832 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199000 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1536' 'HugePages_Free:     1536' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         3145728 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.133    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.133    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:30.134    10:42:18	-- setup/common.sh@33 -- # echo 1536
00:04:30.134    10:42:18	-- setup/common.sh@33 -- # return 0
00:04:30.134   10:42:18	-- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:30.134   10:42:18	-- setup/hugepages.sh@112 -- # get_nodes
00:04:30.134   10:42:18	-- setup/hugepages.sh@27 -- # local node
00:04:30.134   10:42:18	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.134   10:42:18	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:30.134   10:42:18	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.134   10:42:18	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.134   10:42:18	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:30.134   10:42:18	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
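(A hedged sketch of the get_nodes step traced above, which records 512 hugepages on node0 and 1024 on node1 for no_nodes=2. Reading each node's 2 MiB hugepage count from the sysfs nr_hugepages file is an assumption about where those values originate; the trace only shows the resulting assignments.)

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # Assumed source of the per-node counts seen in the trace.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
    echo "no_nodes=$no_nodes"   # 2 on this host: node0=512, node1=1024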
00:04:30.134   10:42:18	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.134   10:42:18	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.134    10:42:18	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.134    10:42:18	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.134    10:42:18	-- setup/common.sh@18 -- # local node=0
00:04:30.134    10:42:18	-- setup/common.sh@19 -- # local var val
00:04:30.134    10:42:18	-- setup/common.sh@20 -- # local mem_f mem
00:04:30.134    10:42:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.134    10:42:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.134    10:42:18	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.134    10:42:18	-- setup/common.sh@28 -- # mapfile -t mem
00:04:30.134    10:42:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134     10:42:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        44867388 kB' 'MemUsed:         3197460 kB' 'SwapCached:            0 kB' 'Active:          1256308 kB' 'Inactive:         171132 kB' 'Active(anon):    1045756 kB' 'Inactive(anon):        0 kB' 'Active(file):     210552 kB' 'Inactive(file):   171132 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       1267756 kB' 'Mapped:            88912 kB' 'AnonPages:        162804 kB' 'Shmem:            886072 kB' 'KernelStack:        8552 kB' 'PageTables:         3212 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80708 kB' 'Slab:             302408 kB' 'SReclaimable:      80708 kB' 'SUnreclaim:       221700 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:   512' 'HugePages_Free:    512' 'HugePages_Surp:      0'
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.134    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.134    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@33 -- # echo 0
00:04:30.135    10:42:18	-- setup/common.sh@33 -- # return 0
00:04:30.135   10:42:18	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
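(For the node-scoped lookup just completed, get_meminfo switched mem_f to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix; the traced mem=("${mem[@]#Node +([0-9]) }") step strips that prefix before the key match, which is why the printf'd dump above shows bare keys. A two-line demonstration of that expansion; the sample line is illustrative.)

    shopt -s extglob
    line='Node 0 HugePages_Surp:      0'       # illustrative sample line
    echo "${line#Node +([0-9]) }"              # -> HugePages_Surp:      0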
00:04:30.135   10:42:18	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.135   10:42:18	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.135    10:42:18	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:30.135    10:42:18	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.135    10:42:18	-- setup/common.sh@18 -- # local node=1
00:04:30.135    10:42:18	-- setup/common.sh@19 -- # local var val
00:04:30.135    10:42:18	-- setup/common.sh@20 -- # local mem_f mem
00:04:30.135    10:42:18	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.135    10:42:18	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:30.135    10:42:18	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:30.135    10:42:18	-- setup/common.sh@28 -- # mapfile -t mem
00:04:30.135    10:42:18	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135     10:42:18	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       44220584 kB' 'MemFree:        32722600 kB' 'MemUsed:        11497984 kB' 'SwapCached:            0 kB' 'Active:          5156724 kB' 'Inactive:        3520368 kB' 'Active(anon):    4974544 kB' 'Inactive(anon):        0 kB' 'Active(file):     182180 kB' 'Inactive(file):  3520368 kB' 'Unevictable:           0 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       8361192 kB' 'Mapped:            69448 kB' 'AnonPages:        316004 kB' 'Shmem:           4658644 kB' 'KernelStack:        7464 kB' 'PageTables:         4176 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      98260 kB' 'Slab:             332532 kB' 'SReclaimable:      98260 kB' 'SUnreclaim:       234272 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.135    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.135    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # continue
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # IFS=': '
00:04:30.136    10:42:18	-- setup/common.sh@31 -- # read -r var val _
00:04:30.136    10:42:18	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.136    10:42:18	-- setup/common.sh@33 -- # echo 0
00:04:30.136    10:42:18	-- setup/common.sh@33 -- # return 0
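
The long run traced above is setup/common.sh's meminfo parser: it snapshots the relevant meminfo file into an array, strips any per-node "Node <id> " prefix, then scans key/value pairs until the requested key matches and echoes its value. A minimal self-contained sketch of that flow, reconstructed from the trace (the name get_meminfo_sketch and the herestring loop are illustrative, not the verbatim SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2 line var val _
        local mem_f=/proc/meminfo
        local -a mem
        # With a node id, read the per-node snapshot from sysfs instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <id> "; strip it so the
        # "key: value" layout matches /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long == / continue run above
            echo "$val"
            return 0
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp on the node snapshot above, it would print 0, matching the echo 0 / return 0 at common.sh@33.
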
00:04:30.136   10:42:18	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.136   10:42:18	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.136   10:42:18	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.136   10:42:18	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.136   10:42:18	-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:30.136  node0=512 expecting 512
00:04:30.136   10:42:18	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.136   10:42:18	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.136   10:42:18	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.136   10:42:18	-- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:30.136  node1=1024 expecting 1024
00:04:30.136   10:42:18	-- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:30.136  
00:04:30.136  real	0m3.531s
00:04:30.136  user	0m1.351s
00:04:30.136  sys	0m2.270s
00:04:30.136   10:42:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:30.136   10:42:18	-- common/autotest_common.sh@10 -- # set +x
00:04:30.136  ************************************
00:04:30.136  END TEST custom_alloc
00:04:30.136  ************************************
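
END TEST custom_alloc closes on the hugepages.sh@130 comparison a few lines up: each node's observed count is echoed next to its expectation (node0=512, node1=1024), and the run passes because the joined observed string equals the expected "512,1024". A minimal sketch of that final check, with the counts taken from the trace and the join idiom assumed for illustration:

    nodes_test=([0]=512 [1]=1024)              # observed per-node counts, from the trace
    expected="512,1024"
    observed=$(IFS=,; echo "${nodes_test[*]}")
    [[ $observed == "$expected" ]] && echo "custom_alloc: counts match"
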
00:04:30.136   10:42:19	-- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:30.136   10:42:19	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:30.136   10:42:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:30.136   10:42:19	-- common/autotest_common.sh@10 -- # set +x
00:04:30.136  ************************************
00:04:30.136  START TEST no_shrink_alloc
00:04:30.136  ************************************
00:04:30.136   10:42:19	-- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:30.136   10:42:19	-- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:30.137   10:42:19	-- setup/hugepages.sh@49 -- # local size=2097152
00:04:30.137   10:42:19	-- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:30.137   10:42:19	-- setup/hugepages.sh@51 -- # shift
00:04:30.137   10:42:19	-- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:30.137   10:42:19	-- setup/hugepages.sh@52 -- # local node_ids
00:04:30.137   10:42:19	-- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.137   10:42:19	-- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:30.137   10:42:19	-- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:30.137   10:42:19	-- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:30.137   10:42:19	-- setup/hugepages.sh@62 -- # local user_nodes
00:04:30.137   10:42:19	-- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:30.137   10:42:19	-- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:30.137   10:42:19	-- setup/hugepages.sh@67 -- # nodes_test=()
00:04:30.137   10:42:19	-- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:30.137   10:42:19	-- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:30.137   10:42:19	-- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.137   10:42:19	-- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:30.137   10:42:19	-- setup/hugepages.sh@73 -- # return 0
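
get_test_nr_hugepages is traced resolving a request of 2097152 into nr_hugepages=1024 for user node 0. That is consistent with a 2048 kB default hugepage (the Hugepagesize reported by meminfo later in this run) and a size argument expressed in kB; the arithmetic, under that assumption:

    size_kb=2097152            # argument from the trace
    default_hugepage_kb=2048   # 2 MiB pages, per Hugepagesize in this run
    echo $(( size_kb / default_hugepage_kb ))   # prints 1024, the traced nr_hugepages
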
00:04:30.137   10:42:19	-- setup/hugepages.sh@198 -- # setup output
00:04:30.137   10:42:19	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.137   10:42:19	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:04:33.428  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:33.428  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:33.428  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
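
setup.sh reports every traced function, including the NVMe device at 0000:5e:00.0, as already bound to vfio-pci, so no rebinding happens before the test. One way to confirm a binding outside the script, using only the standard sysfs layout (device address taken from the trace):

    dev=0000:5e:00.0
    readlink -f /sys/bus/pci/devices/$dev/driver   # path ends in /vfio-pci when bound
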
00:04:33.428   10:42:22	-- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:33.428   10:42:22	-- setup/hugepages.sh@89 -- # local node
00:04:33.428   10:42:22	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:33.428   10:42:22	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:33.428   10:42:22	-- setup/hugepages.sh@92 -- # local surp
00:04:33.428   10:42:22	-- setup/hugepages.sh@93 -- # local resv
00:04:33.428   10:42:22	-- setup/hugepages.sh@94 -- # local anon
00:04:33.428   10:42:22	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:33.428    10:42:22	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:33.428    10:42:22	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:33.428    10:42:22	-- setup/common.sh@18 -- # local node=
00:04:33.428    10:42:22	-- setup/common.sh@19 -- # local var val
00:04:33.428    10:42:22	-- setup/common.sh@20 -- # local mem_f mem
00:04:33.428    10:42:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.428    10:42:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.428    10:42:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.428    10:42:22	-- setup/common.sh@28 -- # mapfile -t mem
00:04:33.428    10:42:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428     10:42:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78610928 kB' 'MemAvailable:   82131264 kB' 'Buffers:            8064 kB' 'Cached:          9620940 kB' 'SwapCached:            0 kB' 'Active:          6414452 kB' 'Inactive:        3691500 kB' 'Active(anon):    6021720 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        479760 kB' 'Mapped:           158420 kB' 'Shmem:           5544772 kB' 'KReclaimable:     178968 kB' 'Slab:             635360 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456392 kB' 'KernelStack:       16352 kB' 'PageTables:         7944 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7218492 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199288 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.428    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.428    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.429    10:42:22	-- setup/common.sh@33 -- # echo 0
00:04:33.429    10:42:22	-- setup/common.sh@33 -- # return 0
00:04:33.429   10:42:22	-- setup/hugepages.sh@97 -- # anon=0
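
anon=0 follows from two reads traced above: hugepages.sh@96 finds /sys/kernel/mm/transparent_hugepage/enabled at "always [madvise] never" (not "[never]", so the check proceeds), and the @97 get_meminfo call returns an AnonHugePages of 0 kB. Both values can be reproduced directly:

    cat /sys/kernel/mm/transparent_hugepage/enabled   # "always [madvise] never" in this run
    grep AnonHugePages: /proc/meminfo                 # "AnonHugePages: 0 kB" here
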
00:04:33.429    10:42:22	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:33.429    10:42:22	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.429    10:42:22	-- setup/common.sh@18 -- # local node=
00:04:33.429    10:42:22	-- setup/common.sh@19 -- # local var val
00:04:33.429    10:42:22	-- setup/common.sh@20 -- # local mem_f mem
00:04:33.429    10:42:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.429    10:42:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.429    10:42:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.429    10:42:22	-- setup/common.sh@28 -- # mapfile -t mem
00:04:33.429    10:42:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429     10:42:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78616360 kB' 'MemAvailable:   82136696 kB' 'Buffers:            8064 kB' 'Cached:          9620940 kB' 'SwapCached:            0 kB' 'Active:          6414988 kB' 'Inactive:        3691500 kB' 'Active(anon):    6022256 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        480316 kB' 'Mapped:           158496 kB' 'Shmem:           5544772 kB' 'KReclaimable:     178968 kB' 'Slab:             635424 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456456 kB' 'KernelStack:       16304 kB' 'PageTables:         8576 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7218504 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199240 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.429    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.429    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.430    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.430    10:42:22	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.431    10:42:22	-- setup/common.sh@33 -- # echo 0
00:04:33.431    10:42:22	-- setup/common.sh@33 -- # return 0
00:04:33.431   10:42:22	-- setup/hugepages.sh@99 -- # surp=0
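
surp=0 records that no surplus hugepages exist: HugePages_Surp counts pages allocated beyond nr_hugepages, which the kernel only permits up to vm.nr_overcommit_hugepages. Both numbers come straight from procfs:

    grep HugePages_Surp: /proc/meminfo        # 0 in this run
    cat /proc/sys/vm/nr_overcommit_hugepages  # ceiling for surplus page allocation
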
00:04:33.431    10:42:22	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:33.431    10:42:22	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:33.431    10:42:22	-- setup/common.sh@18 -- # local node=
00:04:33.431    10:42:22	-- setup/common.sh@19 -- # local var val
00:04:33.431    10:42:22	-- setup/common.sh@20 -- # local mem_f mem
00:04:33.431    10:42:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.431    10:42:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.431    10:42:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.431    10:42:22	-- setup/common.sh@28 -- # mapfile -t mem
00:04:33.431    10:42:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431     10:42:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78618244 kB' 'MemAvailable:   82138580 kB' 'Buffers:            8064 kB' 'Cached:          9620968 kB' 'SwapCached:            0 kB' 'Active:          6413332 kB' 'Inactive:        3691500 kB' 'Active(anon):    6020600 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        479040 kB' 'Mapped:           158396 kB' 'Shmem:           5544800 kB' 'KReclaimable:     178968 kB' 'Slab:             635420 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456452 kB' 'KernelStack:       16016 kB' 'PageTables:         7040 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7214328 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199048 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.431    10:42:22	-- setup/common.sh@32 -- # continue
00:04:33.431    10:42:22	-- setup/common.sh@31 -- # IFS=': '
[trace condensed: setup/common.sh@31-32 repeats the same cycle -- IFS=': ', read -r var val _, then [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue -- for each non-matching /proc/meminfo key: Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free]
00:04:33.432    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.432    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.432    10:42:22	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.432    10:42:22	-- setup/common.sh@33 -- # echo 0
00:04:33.432    10:42:22	-- setup/common.sh@33 -- # return 0
00:04:33.432   10:42:22	-- setup/hugepages.sh@100 -- # resv=0
00:04:33.693   10:42:22	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:33.693  nr_hugepages=1024
00:04:33.693   10:42:22	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:33.693  resv_hugepages=0
00:04:33.693   10:42:22	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:33.693  surplus_hugepages=0
00:04:33.693   10:42:22	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:33.693  anon_hugepages=0
00:04:33.693   10:42:22	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.693   10:42:22	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
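[annotation] The two arithmetic guards above are the core consistency check of verify_nr_hugepages: the kernel's HugePages_Total must equal the requested count plus surplus and reserved pages. With the values gathered in this run it reduces to the following (a direct restatement of hugepages.sh@107/109, values from the log):

    nr_hugepages=1024; surp=0; resv=0
    (( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
    (( 1024 == nr_hugepages ))               || echo "unexpected hugepage count"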
00:04:33.693    10:42:22	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:33.693    10:42:22	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:33.693    10:42:22	-- setup/common.sh@18 -- # local node=
00:04:33.693    10:42:22	-- setup/common.sh@19 -- # local var val
00:04:33.693    10:42:22	-- setup/common.sh@20 -- # local mem_f mem
00:04:33.693    10:42:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.693    10:42:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.693    10:42:22	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.693    10:42:22	-- setup/common.sh@28 -- # mapfile -t mem
00:04:33.693    10:42:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.693    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.693    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.693     10:42:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78619300 kB' 'MemAvailable:   82139636 kB' 'Buffers:            8064 kB' 'Cached:          9620972 kB' 'SwapCached:            0 kB' 'Active:          6413284 kB' 'Inactive:        3691500 kB' 'Active(anon):    6020552 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        478940 kB' 'Mapped:           158392 kB' 'Shmem:           5544804 kB' 'KReclaimable:     178968 kB' 'Slab:             635356 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456388 kB' 'KernelStack:       16000 kB' 'PageTables:         7332 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7214984 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199048 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
[trace condensed: the HugePages_Total scan repeats the same IFS/read/match/continue cycle for every /proc/meminfo key in the printf snapshot above, from MemTotal through Unaccepted, before reaching its target]
00:04:33.694    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.694    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.694    10:42:22	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.694    10:42:22	-- setup/common.sh@33 -- # echo 1024
00:04:33.694    10:42:22	-- setup/common.sh@33 -- # return 0
00:04:33.694   10:42:22	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.694   10:42:22	-- setup/hugepages.sh@112 -- # get_nodes
00:04:33.694   10:42:22	-- setup/hugepages.sh@27 -- # local node
00:04:33.694   10:42:22	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.694   10:42:22	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:33.694   10:42:22	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.694   10:42:22	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:33.694   10:42:22	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:33.694   10:42:22	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
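[annotation] get_nodes discovers NUMA nodes by globbing /sys/devices/system/node/node+([0-9]) -- an extglob pattern, hence the two loop iterations and no_nodes=2 on this dual-socket box (node0 holds all 1024 hugepages, node1 none). A hedged sketch of the idiom; reading the per-node count from the hugepages sysfs file is an assumption, since the source of the 1024/0 values is not visible in this trace:

    shopt -s extglob nullglob          # +([0-9]) requires extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node", leaving the id.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"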
00:04:33.694   10:42:22	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.694   10:42:22	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.694    10:42:22	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:33.694    10:42:22	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.694    10:42:22	-- setup/common.sh@18 -- # local node=0
00:04:33.694    10:42:22	-- setup/common.sh@19 -- # local var val
00:04:33.694    10:42:22	-- setup/common.sh@20 -- # local mem_f mem
00:04:33.694    10:42:22	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.694    10:42:22	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:33.694    10:42:22	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:33.694    10:42:22	-- setup/common.sh@28 -- # mapfile -t mem
00:04:33.694    10:42:22	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.694    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.694    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.695     10:42:22	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        43819596 kB' 'MemUsed:         4245252 kB' 'SwapCached:            0 kB' 'Active:          1258904 kB' 'Inactive:         171132 kB' 'Active(anon):    1048352 kB' 'Inactive(anon):        0 kB' 'Active(file):     210552 kB' 'Inactive(file):   171132 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       1267776 kB' 'Mapped:            88912 kB' 'AnonPages:        165372 kB' 'Shmem:            886092 kB' 'KernelStack:        8536 kB' 'PageTables:         3168 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80708 kB' 'Slab:             302696 kB' 'SReclaimable:      80708 kB' 'SUnreclaim:       221988 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
[trace condensed: the node0 HugePages_Surp scan repeats the same IFS/read/match/continue cycle for every key in the node0 snapshot above, from MemTotal through HugePages_Total and HugePages_Free]
00:04:33.696    10:42:22	-- setup/common.sh@31 -- # IFS=': '
00:04:33.696    10:42:22	-- setup/common.sh@31 -- # read -r var val _
00:04:33.696    10:42:22	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.696    10:42:22	-- setup/common.sh@33 -- # echo 0
00:04:33.696    10:42:22	-- setup/common.sh@33 -- # return 0
00:04:33.696   10:42:22	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:33.696   10:42:22	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:33.696   10:42:22	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:33.696   10:42:22	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:33.696   10:42:22	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:33.696  node0=1024 expecting 1024
00:04:33.696   10:42:22	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
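[annotation] The sorted_t/sorted_s assignments above use a small bash idiom: array subscripts act as a set, so writing sorted_t[count]=1 for every node leaves one key per distinct per-node count, and a single remaining key means all nodes agree with expectations. Sketch with this run's values (variable names match the trace):

    declare -A sorted_t=()
    nodes_test=(1024)                  # node0's count after adding resv=0
    for n in "${nodes_test[@]}"; do sorted_t[$n]=1; done
    echo "distinct counts: ${!sorted_t[@]}"   # -> 1024, matching "expecting 1024"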
00:04:33.696   10:42:22	-- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:33.696   10:42:22	-- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:33.696   10:42:22	-- setup/hugepages.sh@202 -- # setup output
00:04:33.696   10:42:22	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:33.696   10:42:22	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:04:36.989  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:36.989  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:36.989  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:36.989  INFO: Requested 512 hugepages but 1024 already allocated on node0
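[annotation] NRHUGE=512 with CLEAR_HUGE=no asks setup.sh for 512 hugepages without first freeing the existing pool; since 1024 pages are already allocated on node0, the script keeps them and only logs the INFO line above. A hedged reduction of that behavior (illustrative, not the literal scripts/setup.sh logic):

    NRHUGE=512
    nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(< "$nr")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested ${NRHUGE} hugepages but ${current} already allocated on node0"
    else
        echo "$NRHUGE" > "$nr"        # only grow, never shrink, when CLEAR_HUGE=no
    fi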
00:04:36.989   10:42:25	-- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:36.989   10:42:25	-- setup/hugepages.sh@89 -- # local node
00:04:36.989   10:42:25	-- setup/hugepages.sh@90 -- # local sorted_t
00:04:36.989   10:42:25	-- setup/hugepages.sh@91 -- # local sorted_s
00:04:36.989   10:42:25	-- setup/hugepages.sh@92 -- # local surp
00:04:36.989   10:42:25	-- setup/hugepages.sh@93 -- # local resv
00:04:36.989   10:42:25	-- setup/hugepages.sh@94 -- # local anon
00:04:36.989   10:42:25	-- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
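[annotation] hugepages.sh@96 inspects the transparent-hugepage mode string -- here "always [madvise] never", where the brackets mark the active mode -- and only samples AnonHugePages (the get_meminfo call that follows) when THP is not pinned to [never]. Equivalent hedged sketch:

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB on this host
    else
        anon=0
    fi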
00:04:36.989    10:42:25	-- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:36.989    10:42:25	-- setup/common.sh@17 -- # local get=AnonHugePages
00:04:36.989    10:42:25	-- setup/common.sh@18 -- # local node=
00:04:36.989    10:42:25	-- setup/common.sh@19 -- # local var val
00:04:36.989    10:42:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:36.989    10:42:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.989    10:42:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.989    10:42:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.989    10:42:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:36.989    10:42:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.989    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.989    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.989     10:42:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78648716 kB' 'MemAvailable:   82169052 kB' 'Buffers:            8064 kB' 'Cached:          9621044 kB' 'SwapCached:            0 kB' 'Active:          6413332 kB' 'Inactive:        3691500 kB' 'Active(anon):    6020600 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        479020 kB' 'Mapped:           158468 kB' 'Shmem:           5544876 kB' 'KReclaimable:     178968 kB' 'Slab:             635564 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456596 kB' 'KernelStack:       16000 kB' 'PageTables:         7344 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7214664 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199048 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
[trace condensed: the AnonHugePages scan repeats the same IFS/read/match/continue cycle for every key in the snapshot above, from MemTotal through HardwareCorrupted]
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:36.990    10:42:25	-- setup/common.sh@33 -- # echo 0
00:04:36.990    10:42:25	-- setup/common.sh@33 -- # return 0
00:04:36.990   10:42:25	-- setup/hugepages.sh@97 -- # anon=0
00:04:36.990    10:42:25	-- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:36.990    10:42:25	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.990    10:42:25	-- setup/common.sh@18 -- # local node=
00:04:36.990    10:42:25	-- setup/common.sh@19 -- # local var val
00:04:36.990    10:42:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:36.990    10:42:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.990    10:42:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.990    10:42:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.990    10:42:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:36.990    10:42:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990     10:42:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78648352 kB' 'MemAvailable:   82168688 kB' 'Buffers:            8064 kB' 'Cached:          9621044 kB' 'SwapCached:            0 kB' 'Active:          6415196 kB' 'Inactive:        3691500 kB' 'Active(anon):    6022464 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        480824 kB' 'Mapped:           158896 kB' 'Shmem:           5544876 kB' 'KReclaimable:     178968 kB' 'Slab:             635512 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456544 kB' 'KernelStack:       15984 kB' 'PageTables:         7288 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7216956 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.990    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.990    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.991    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.991    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.992    10:42:25	-- setup/common.sh@33 -- # echo 0
00:04:36.992    10:42:25	-- setup/common.sh@33 -- # return 0
00:04:36.992   10:42:25	-- setup/hugepages.sh@99 -- # surp=0
00:04:36.992    10:42:25	-- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:36.992    10:42:25	-- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.992    10:42:25	-- setup/common.sh@18 -- # local node=
00:04:36.992    10:42:25	-- setup/common.sh@19 -- # local var val
00:04:36.992    10:42:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:36.992    10:42:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.992    10:42:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.992    10:42:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.992    10:42:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:36.992    10:42:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.992     10:42:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78644092 kB' 'MemAvailable:   82164428 kB' 'Buffers:            8064 kB' 'Cached:          9621056 kB' 'SwapCached:            0 kB' 'Active:          6419304 kB' 'Inactive:        3691500 kB' 'Active(anon):    6026572 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        484924 kB' 'Mapped:           159176 kB' 'Shmem:           5544888 kB' 'KReclaimable:     178968 kB' 'Slab:             635504 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456536 kB' 'KernelStack:       16000 kB' 'PageTables:         7364 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7220808 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199020 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.992    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.992    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.993    10:42:25	-- setup/common.sh@33 -- # echo 0
00:04:36.993    10:42:25	-- setup/common.sh@33 -- # return 0
00:04:36.993   10:42:25	-- setup/hugepages.sh@100 -- # resv=0
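Note that each get_meminfo call re-reads /proc/meminfo from scratch, so successive snapshots can drift slightly while the test runs (MemFree is 78648352 kB in the first dump above and 78644092 kB in the second). Where a single consistent view matters, one snapshot can be captured and queried repeatedly; a hedged sketch of that variant (lookup is an illustrative name, not part of the scripts traced here):

# Take a single /proc/meminfo snapshot and query it several times,
# avoiding the drift visible between the dumps above.
mapfile -t mem < /proc/meminfo
lookup() {
	local get=$1 line var val _
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] && { echo "$val"; return 0; }
	done
	return 1
}
surp=$(lookup HugePages_Surp)   # 0 in this run
resv=$(lookup HugePages_Rsvd)   # 0 in this run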
00:04:36.993   10:42:25	-- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:36.993  nr_hugepages=1024
00:04:36.993   10:42:25	-- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:36.993  resv_hugepages=0
00:04:36.993   10:42:25	-- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:36.993  surplus_hugepages=0
00:04:36.993   10:42:25	-- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:36.993  anon_hugepages=0
00:04:36.993   10:42:25	-- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.993   10:42:25	-- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:36.993    10:42:25	-- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:36.993    10:42:25	-- setup/common.sh@17 -- # local get=HugePages_Total
00:04:36.993    10:42:25	-- setup/common.sh@18 -- # local node=
00:04:36.993    10:42:25	-- setup/common.sh@19 -- # local var val
00:04:36.993    10:42:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:36.993    10:42:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.993    10:42:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.993    10:42:25	-- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.993    10:42:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:36.993    10:42:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993     10:42:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       92285432 kB' 'MemFree:        78644092 kB' 'MemAvailable:   82164428 kB' 'Buffers:            8064 kB' 'Cached:          9621072 kB' 'SwapCached:            0 kB' 'Active:          6413660 kB' 'Inactive:        3691500 kB' 'Active(anon):    6020928 kB' 'Inactive(anon):        0 kB' 'Active(file):     392732 kB' 'Inactive(file):  3691500 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'SwapTotal:       8388604 kB' 'SwapFree:        8388604 kB' 'Zswap:                 0 kB' 'Zswapped:              0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'AnonPages:        479272 kB' 'Mapped:           158392 kB' 'Shmem:           5544904 kB' 'KReclaimable:     178968 kB' 'Slab:             635504 kB' 'SReclaimable:     178968 kB' 'SUnreclaim:       456536 kB' 'KernelStack:       15984 kB' 'PageTables:         7288 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'CommitLimit:    53482744 kB' 'Committed_AS:    7214704 kB' 'VmallocTotal:   34359738367 kB' 'VmallocUsed:      199016 kB' 'VmallocChunk:          0 kB' 'Percpu:            44352 kB' 'HardwareCorrupted:     0 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'CmaTotal:              0 kB' 'CmaFree:               0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:    1024' 'HugePages_Free:     1024' 'HugePages_Rsvd:        0' 'HugePages_Surp:        0' 'Hugepagesize:       2048 kB' 'Hugetlb:         2097152 kB' 'DirectMap4k:      515496 kB' 'DirectMap2M:     8597504 kB' 'DirectMap1G:    93323264 kB'
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.993    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.993    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.994    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.994    10:42:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.995    10:42:25	-- setup/common.sh@33 -- # echo 1024
00:04:36.995    10:42:25	-- setup/common.sh@33 -- # return 0
00:04:36.995   10:42:25	-- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
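The check at hugepages.sh@110 that just passed encodes the pool's accounting invariant: HugePages_Total must equal the configured page count plus surplus plus reserved pages. A standalone restatement of that arithmetic (variable names are illustrative; the values are the ones this run produced):

# Hugepage pool accounting: total == nr_hugepages + surplus + reserved.
nr_hugepages=1024   # from the 'nr_hugepages=1024' echo above
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage pool consistent: ${total} pages"
else
	echo "hugepage accounting mismatch: ${total} != $((nr_hugepages + surp + resv))" >&2
fi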
00:04:36.995   10:42:25	-- setup/hugepages.sh@112 -- # get_nodes
00:04:36.995   10:42:25	-- setup/hugepages.sh@27 -- # local node
00:04:36.995   10:42:25	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.995   10:42:25	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:36.995   10:42:25	-- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.995   10:42:25	-- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:36.995   10:42:25	-- setup/hugepages.sh@32 -- # no_nodes=2
00:04:36.995   10:42:25	-- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
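get_nodes above found two NUMA nodes (no_nodes=2), recording 1024 hugepages for node0 and 0 for node1; the per-node figures come from /sys/devices/system/node/node<N>/meminfo, whose lines carry a 'Node <N> ' prefix that the script strips with its extglob substitution. A small sketch of that per-node enumeration, under the same +([0-9]) glob pattern the trace uses:

# Enumerate NUMA nodes and report each node's hugepage total.
shopt -s extglob nullglob
for node in /sys/devices/system/node/node+([0-9]); do
	n=${node##*node}                                   # node index: 0, 1, ...
	# Per-node lines look like: 'Node 0 HugePages_Total:  1024'
	total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
	echo "node${n}: HugePages_Total=${total}"
done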
00:04:36.995   10:42:25	-- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:36.995   10:42:25	-- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:36.995    10:42:25	-- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:36.995    10:42:25	-- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.995    10:42:25	-- setup/common.sh@18 -- # local node=0
00:04:36.995    10:42:25	-- setup/common.sh@19 -- # local var val
00:04:36.995    10:42:25	-- setup/common.sh@20 -- # local mem_f mem
00:04:36.995    10:42:25	-- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.995    10:42:25	-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:36.995    10:42:25	-- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:36.995    10:42:25	-- setup/common.sh@28 -- # mapfile -t mem
00:04:36.995    10:42:25	-- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995     10:42:25	-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal:       48064848 kB' 'MemFree:        43825436 kB' 'MemUsed:         4239412 kB' 'SwapCached:            0 kB' 'Active:          1256760 kB' 'Inactive:         171132 kB' 'Active(anon):    1046208 kB' 'Inactive(anon):        0 kB' 'Active(file):     210552 kB' 'Inactive(file):   171132 kB' 'Unevictable:        3072 kB' 'Mlocked:               0 kB' 'Dirty:                 0 kB' 'Writeback:             0 kB' 'FilePages:       1267804 kB' 'Mapped:            88916 kB' 'AnonPages:        163248 kB' 'Shmem:            886120 kB' 'KernelStack:        8520 kB' 'PageTables:         3124 kB' 'SecPageTables:         0 kB' 'NFS_Unstable:          0 kB' 'Bounce:                0 kB' 'WritebackTmp:          0 kB' 'KReclaimable:      80708 kB' 'Slab:             302952 kB' 'SReclaimable:      80708 kB' 'SUnreclaim:       222244 kB' 'AnonHugePages:         0 kB' 'ShmemHugePages:        0 kB' 'ShmemPmdMapped:        0 kB' 'FileHugePages:         0 kB' 'FilePmdMapped:         0 kB' 'Unaccepted:            0 kB' 'HugePages_Total:  1024' 'HugePages_Free:   1024' 'HugePages_Surp:      0'
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.995    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.995    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # continue
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # IFS=': '
00:04:36.996    10:42:25	-- setup/common.sh@31 -- # read -r var val _
00:04:36.996    10:42:25	-- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.996    10:42:25	-- setup/common.sh@33 -- # echo 0
00:04:36.996    10:42:25	-- setup/common.sh@33 -- # return 0
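For context, the loop traced above is setup/common.sh walking a meminfo dump one "Key: value" pair at a time, skipping every field until it reaches HugePages_Surp and echoing that value. A minimal standalone sketch of the same pattern (it reads /proc/meminfo directly; the get_field name and that source are illustrative, while the real helper consumes the per-node printf shown at the top of this block):

    # Sketch: scan meminfo-style "Key: value" lines for one field and print its value.
    get_field() {
        local wanted=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$wanted" ]] || continue   # skip all other fields, as the trace does
            echo "${val:-0}"                      # value is a kB count or a page count
            return 0
        done < /proc/meminfo
        echo 0                                    # field absent: report 0, matching "echo 0" above
    }
    get_field HugePages_Surp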
00:04:36.996   10:42:25	-- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:36.996   10:42:25	-- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:36.996   10:42:25	-- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:36.996   10:42:25	-- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:36.996   10:42:25	-- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:36.996  node0=1024 expecting 1024
00:04:36.996   10:42:25	-- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:36.996  
00:04:36.996  real	0m6.849s
00:04:36.996  user	0m2.620s
00:04:36.996  sys	0m4.408s
00:04:36.996   10:42:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:36.996   10:42:25	-- common/autotest_common.sh@10 -- # set +x
00:04:36.996  ************************************
00:04:36.996  END TEST no_shrink_alloc
00:04:36.996  ************************************
00:04:36.996   10:42:25	-- setup/hugepages.sh@217 -- # clear_hp
00:04:36.996   10:42:25	-- setup/hugepages.sh@37 -- # local node hp
00:04:36.996   10:42:25	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:36.996   10:42:25	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.996   10:42:25	-- setup/hugepages.sh@41 -- # echo 0
00:04:36.996   10:42:25	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.996   10:42:25	-- setup/hugepages.sh@41 -- # echo 0
00:04:36.996   10:42:25	-- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:36.996   10:42:25	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.996   10:42:25	-- setup/hugepages.sh@41 -- # echo 0
00:04:36.996   10:42:25	-- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.996   10:42:25	-- setup/hugepages.sh@41 -- # echo 0
00:04:36.996   10:42:25	-- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:36.996   10:42:25	-- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
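The clear_hp trace above iterates every hugepage size under every NUMA node and echoes 0 into each, releasing the reservation before the next test. An equivalent sketch (the nr_hugepages target file is assumed from the standard sysfs layout; the log only shows the per-directory "echo 0", and the write requires root):

    # Sketch: release all reserved hugepages on every node (needs root).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"    # one write per node per hugepage size, as in the trace
    done
    export CLEAR_HUGE=yes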
00:04:36.996  
00:04:36.996  real	0m28.331s
00:04:36.996  user	0m9.699s
00:04:36.996  sys	0m16.233s
00:04:36.996   10:42:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:36.996   10:42:25	-- common/autotest_common.sh@10 -- # set +x
00:04:36.996  ************************************
00:04:36.996  END TEST hugepages
00:04:36.996  ************************************
00:04:37.255   10:42:26	-- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/driver.sh
00:04:37.255   10:42:26	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:37.255   10:42:26	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:37.255   10:42:26	-- common/autotest_common.sh@10 -- # set +x
00:04:37.255  ************************************
00:04:37.255  START TEST driver
00:04:37.255  ************************************
00:04:37.255   10:42:26	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/driver.sh
00:04:37.255  * Looking for test storage...
00:04:37.255  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:04:37.255     10:42:26	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:37.255      10:42:26	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:37.255      10:42:26	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:37.255     10:42:26	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:37.255     10:42:26	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:37.255     10:42:26	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:37.255     10:42:26	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:37.255     10:42:26	-- scripts/common.sh@335 -- # IFS=.-:
00:04:37.255     10:42:26	-- scripts/common.sh@335 -- # read -ra ver1
00:04:37.255     10:42:26	-- scripts/common.sh@336 -- # IFS=.-:
00:04:37.255     10:42:26	-- scripts/common.sh@336 -- # read -ra ver2
00:04:37.255     10:42:26	-- scripts/common.sh@337 -- # local 'op=<'
00:04:37.255     10:42:26	-- scripts/common.sh@339 -- # ver1_l=2
00:04:37.255     10:42:26	-- scripts/common.sh@340 -- # ver2_l=1
00:04:37.255     10:42:26	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:37.255     10:42:26	-- scripts/common.sh@343 -- # case "$op" in
00:04:37.255     10:42:26	-- scripts/common.sh@344 -- # : 1
00:04:37.255     10:42:26	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:37.255     10:42:26	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:37.255      10:42:26	-- scripts/common.sh@364 -- # decimal 1
00:04:37.255      10:42:26	-- scripts/common.sh@352 -- # local d=1
00:04:37.255      10:42:26	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:37.255      10:42:26	-- scripts/common.sh@354 -- # echo 1
00:04:37.255     10:42:26	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:37.255      10:42:26	-- scripts/common.sh@365 -- # decimal 2
00:04:37.255      10:42:26	-- scripts/common.sh@352 -- # local d=2
00:04:37.255      10:42:26	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:37.255      10:42:26	-- scripts/common.sh@354 -- # echo 2
00:04:37.255     10:42:26	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:37.255     10:42:26	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:37.255     10:42:26	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:37.256     10:42:26	-- scripts/common.sh@367 -- # return 0
00:04:37.256     10:42:26	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:37.256     10:42:26	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:37.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.256  		--rc genhtml_branch_coverage=1
00:04:37.256  		--rc genhtml_function_coverage=1
00:04:37.256  		--rc genhtml_legend=1
00:04:37.256  		--rc geninfo_all_blocks=1
00:04:37.256  		--rc geninfo_unexecuted_blocks=1
00:04:37.256  		
00:04:37.256  		'
00:04:37.256     10:42:26	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:37.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.256  		--rc genhtml_branch_coverage=1
00:04:37.256  		--rc genhtml_function_coverage=1
00:04:37.256  		--rc genhtml_legend=1
00:04:37.256  		--rc geninfo_all_blocks=1
00:04:37.256  		--rc geninfo_unexecuted_blocks=1
00:04:37.256  		
00:04:37.256  		'
00:04:37.256     10:42:26	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:37.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.256  		--rc genhtml_branch_coverage=1
00:04:37.256  		--rc genhtml_function_coverage=1
00:04:37.256  		--rc genhtml_legend=1
00:04:37.256  		--rc geninfo_all_blocks=1
00:04:37.256  		--rc geninfo_unexecuted_blocks=1
00:04:37.256  		
00:04:37.256  		'
00:04:37.256     10:42:26	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:37.256  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.256  		--rc genhtml_branch_coverage=1
00:04:37.256  		--rc genhtml_function_coverage=1
00:04:37.256  		--rc genhtml_legend=1
00:04:37.256  		--rc geninfo_all_blocks=1
00:04:37.256  		--rc geninfo_unexecuted_blocks=1
00:04:37.256  		
00:04:37.256  		'
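The scripts/common.sh trace above is a component-wise version comparison: both versions are split on ".", "-", and ":", then compared element by element to decide whether the installed lcov predates 2.x (which selects the legacy --rc option spelling in LCOV_OPTS). A condensed sketch of that lt/cmp_versions logic, assuming numeric components (the real script routes each component through a decimal() guard):

    # Sketch: succeed when version $1 is strictly less than version $2.
    version_lt() {
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing part decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: use legacy lcov_branch_coverage options"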
00:04:37.256   10:42:26	-- setup/driver.sh@68 -- # setup reset
00:04:37.256   10:42:26	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:37.256   10:42:26	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:04:42.525   10:42:30	-- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:42.525   10:42:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:42.525   10:42:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:42.525   10:42:30	-- common/autotest_common.sh@10 -- # set +x
00:04:42.525  ************************************
00:04:42.525  START TEST guess_driver
00:04:42.525  ************************************
00:04:42.525   10:42:30	-- common/autotest_common.sh@1114 -- # guess_driver
00:04:42.525   10:42:30	-- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:42.525   10:42:30	-- setup/driver.sh@47 -- # local fail=0
00:04:42.525    10:42:30	-- setup/driver.sh@49 -- # pick_driver
00:04:42.525    10:42:30	-- setup/driver.sh@36 -- # vfio
00:04:42.525    10:42:30	-- setup/driver.sh@21 -- # local iommu_groups
00:04:42.525    10:42:30	-- setup/driver.sh@22 -- # local unsafe_vfio
00:04:42.525    10:42:30	-- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:42.525    10:42:30	-- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:42.525    10:42:30	-- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:42.525    10:42:30	-- setup/driver.sh@29 -- # (( 162 > 0 ))
00:04:42.525    10:42:30	-- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:42.525    10:42:30	-- setup/driver.sh@14 -- # mod vfio_pci
00:04:42.525     10:42:30	-- setup/driver.sh@12 -- # dep vfio_pci
00:04:42.525     10:42:30	-- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:42.525    10:42:30	-- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 
00:04:42.525  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:04:42.525  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:04:42.525  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 
00:04:42.525  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 
00:04:42.525  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 
00:04:42.525  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 
00:04:42.525  insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz  == *\.\k\o* ]]
00:04:42.525    10:42:30	-- setup/driver.sh@30 -- # return 0
00:04:42.525    10:42:30	-- setup/driver.sh@37 -- # echo vfio-pci
00:04:42.525   10:42:30	-- setup/driver.sh@49 -- # driver=vfio-pci
00:04:42.525   10:42:30	-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
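The trace above is driver.sh deciding that vfio-pci is usable: it notes the unsafe-noiommu knob (N here), counts the IOMMU groups exposed in sysfs (162 on this node), and confirms the module resolves to real .ko files via modprobe --show-depends. A condensed sketch of that selection, using the same "No valid driver found" fallback string the log tests against:

    # Sketch: choose vfio-pci only when IOMMU groups exist and the module resolves.
    # (The real helper also reads /sys/module/vfio/parameters/enable_unsafe_noiommu_mode.)
    pick_vfio() {
        shopt -s nullglob                        # empty glob -> zero elements
        local groups=(/sys/kernel/iommu_groups/*)
        (( ${#groups[@]} > 0 )) || return 1      # the trace counted 162 groups
        modprobe --show-depends vfio_pci | grep -q '\.ko' || return 1
        echo vfio-pci
    }
    driver=$(pick_vfio) || driver='No valid driver found'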
00:04:42.525   10:42:30	-- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:42.525  Looking for driver=vfio-pci
00:04:42.525   10:42:30	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:42.525    10:42:30	-- setup/driver.sh@45 -- # setup output config
00:04:42.525    10:42:30	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:42.525    10:42:30	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:04:45.813   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.813   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.813   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:45.814   10:42:34	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:45.814   10:42:34	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:45.814   10:42:34	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:49.106   10:42:37	-- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:49.106   10:42:37	-- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:49.106   10:42:37	-- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:49.106   10:42:37	-- setup/driver.sh@64 -- # (( fail == 0 ))
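The long run of read/test pairs above is the verification loop: each device line printed by "setup.sh config" is parsed for its "-> driver" arrow and the bound driver is compared against vfio-pci, with fail staying at 0 throughout. A sketch of that loop, with the field layout inferred from the "read -r _ _ _ _ marker setup_driver" trace and the script path taken from this workspace:

    # Sketch: count devices whose bound driver differs from the expected one.
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue              # only lines carrying "-> driver"
        [[ $setup_driver == vfio-pci ]] || (( fail++ ))
    done < <(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config)
    (( fail == 0 )) && echo "all devices bound to vfio-pci"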
00:04:49.106   10:42:37	-- setup/driver.sh@65 -- # setup reset
00:04:49.106   10:42:37	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:49.106   10:42:37	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:04:53.300  
00:04:53.300  real	0m11.344s
00:04:53.300  user	0m2.507s
00:04:53.300  sys	0m5.059s
00:04:53.300   10:42:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:53.300   10:42:42	-- common/autotest_common.sh@10 -- # set +x
00:04:53.300  ************************************
00:04:53.300  END TEST guess_driver
00:04:53.300  ************************************
00:04:53.300  
00:04:53.300  real	0m16.240s
00:04:53.300  user	0m3.898s
00:04:53.300  sys	0m7.806s
00:04:53.300   10:42:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:53.300   10:42:42	-- common/autotest_common.sh@10 -- # set +x
00:04:53.300  ************************************
00:04:53.300  END TEST driver
00:04:53.300  ************************************
00:04:53.300   10:42:42	-- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/devices.sh
00:04:53.300   10:42:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:53.300   10:42:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:53.300   10:42:42	-- common/autotest_common.sh@10 -- # set +x
00:04:53.300  ************************************
00:04:53.300  START TEST devices
00:04:53.300  ************************************
00:04:53.300   10:42:42	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/devices.sh
00:04:53.559  * Looking for test storage...
00:04:53.559  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup
00:04:53.559     10:42:42	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:53.559      10:42:42	-- common/autotest_common.sh@1690 -- # lcov --version
00:04:53.559      10:42:42	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:53.559     10:42:42	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:53.559     10:42:42	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:53.559     10:42:42	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:53.559     10:42:42	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:53.559     10:42:42	-- scripts/common.sh@335 -- # IFS=.-:
00:04:53.559     10:42:42	-- scripts/common.sh@335 -- # read -ra ver1
00:04:53.559     10:42:42	-- scripts/common.sh@336 -- # IFS=.-:
00:04:53.559     10:42:42	-- scripts/common.sh@336 -- # read -ra ver2
00:04:53.559     10:42:42	-- scripts/common.sh@337 -- # local 'op=<'
00:04:53.559     10:42:42	-- scripts/common.sh@339 -- # ver1_l=2
00:04:53.559     10:42:42	-- scripts/common.sh@340 -- # ver2_l=1
00:04:53.559     10:42:42	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:53.559     10:42:42	-- scripts/common.sh@343 -- # case "$op" in
00:04:53.559     10:42:42	-- scripts/common.sh@344 -- # : 1
00:04:53.559     10:42:42	-- scripts/common.sh@363 -- # (( v = 0 ))
00:04:53.559     10:42:42	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:53.559      10:42:42	-- scripts/common.sh@364 -- # decimal 1
00:04:53.559      10:42:42	-- scripts/common.sh@352 -- # local d=1
00:04:53.559      10:42:42	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:53.559      10:42:42	-- scripts/common.sh@354 -- # echo 1
00:04:53.559     10:42:42	-- scripts/common.sh@364 -- # ver1[v]=1
00:04:53.559      10:42:42	-- scripts/common.sh@365 -- # decimal 2
00:04:53.559      10:42:42	-- scripts/common.sh@352 -- # local d=2
00:04:53.559      10:42:42	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:53.559      10:42:42	-- scripts/common.sh@354 -- # echo 2
00:04:53.559     10:42:42	-- scripts/common.sh@365 -- # ver2[v]=2
00:04:53.559     10:42:42	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:53.559     10:42:42	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:53.559     10:42:42	-- scripts/common.sh@367 -- # return 0
00:04:53.559     10:42:42	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:53.559     10:42:42	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:53.559  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.559  		--rc genhtml_branch_coverage=1
00:04:53.559  		--rc genhtml_function_coverage=1
00:04:53.559  		--rc genhtml_legend=1
00:04:53.559  		--rc geninfo_all_blocks=1
00:04:53.559  		--rc geninfo_unexecuted_blocks=1
00:04:53.559  		
00:04:53.559  		'
00:04:53.559     10:42:42	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:53.559  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.559  		--rc genhtml_branch_coverage=1
00:04:53.559  		--rc genhtml_function_coverage=1
00:04:53.559  		--rc genhtml_legend=1
00:04:53.559  		--rc geninfo_all_blocks=1
00:04:53.559  		--rc geninfo_unexecuted_blocks=1
00:04:53.559  		
00:04:53.559  		'
00:04:53.559     10:42:42	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:04:53.559  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.559  		--rc genhtml_branch_coverage=1
00:04:53.559  		--rc genhtml_function_coverage=1
00:04:53.559  		--rc genhtml_legend=1
00:04:53.559  		--rc geninfo_all_blocks=1
00:04:53.559  		--rc geninfo_unexecuted_blocks=1
00:04:53.559  		
00:04:53.559  		'
00:04:53.559     10:42:42	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:04:53.559  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:53.559  		--rc genhtml_branch_coverage=1
00:04:53.559  		--rc genhtml_function_coverage=1
00:04:53.559  		--rc genhtml_legend=1
00:04:53.559  		--rc geninfo_all_blocks=1
00:04:53.559  		--rc geninfo_unexecuted_blocks=1
00:04:53.559  		
00:04:53.559  		'
00:04:53.559   10:42:42	-- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:53.559   10:42:42	-- setup/devices.sh@192 -- # setup reset
00:04:53.559   10:42:42	-- setup/common.sh@9 -- # [[ reset == output ]]
00:04:53.559   10:42:42	-- setup/common.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:04:57.753   10:42:46	-- setup/devices.sh@194 -- # get_zoned_devs
00:04:57.753   10:42:46	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:04:57.753   10:42:46	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:04:57.753   10:42:46	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:04:57.753   10:42:46	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:04:57.753   10:42:46	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:04:57.753   10:42:46	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:04:57.753   10:42:46	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:57.753   10:42:46	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
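The get_zoned_devs trace above filters out zoned namespaces before the device tests touch anything: for each /sys/block/nvme* entry it reads queue/zoned and records the device unless the kernel reports "none". Equivalently:

    # Sketch: collect zoned block devices so the tests can skip them.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs[$dev]=1    # anything other than "none" is a zoned device
        fi
    done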
00:04:57.754   10:42:46	-- setup/devices.sh@196 -- # blocks=()
00:04:57.754   10:42:46	-- setup/devices.sh@196 -- # declare -a blocks
00:04:57.754   10:42:46	-- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:57.754   10:42:46	-- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:57.754   10:42:46	-- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:57.754   10:42:46	-- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:57.754   10:42:46	-- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:57.754   10:42:46	-- setup/devices.sh@201 -- # ctrl=nvme0
00:04:57.754   10:42:46	-- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:04:57.754   10:42:46	-- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:04:57.754   10:42:46	-- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:57.754   10:42:46	-- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:04:57.754   10:42:46	-- scripts/common.sh@389 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:57.754  No valid GPT data, bailing
00:04:57.754    10:42:46	-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:57.754   10:42:46	-- scripts/common.sh@393 -- # pt=
00:04:57.754   10:42:46	-- scripts/common.sh@394 -- # return 1
00:04:57.754    10:42:46	-- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:57.754    10:42:46	-- setup/common.sh@76 -- # local dev=nvme0n1
00:04:57.754    10:42:46	-- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:57.754    10:42:46	-- setup/common.sh@80 -- # echo 4000787030016
00:04:57.754   10:42:46	-- setup/devices.sh@204 -- # (( 4000787030016 >= min_disk_size ))
00:04:57.754   10:42:46	-- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:57.754   10:42:46	-- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:04:57.754   10:42:46	-- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:57.754   10:42:46	-- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
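The discovery above accepts nvme0n1 because neither the spdk-gpt.py probe ("No valid GPT data, bailing") nor blkid finds a partition-table signature, and its size clears min_disk_size. A sketch of that gate (the spdk-gpt.py step is omitted; size is sectors times 512, which yields the 4000787030016 bytes echoed above):

    # Sketch: a disk is usable when blkid finds no partition table
    # and the size (sectors * 512) clears the 3 GiB minimum.
    min_disk_size=$(( 3 * 1024 * 1024 * 1024 ))
    block=nvme0n1
    pt=$(blkid -s PTTYPE -o value "/dev/$block" || true)   # empty when no table
    size=$(( $(< "/sys/block/$block/size") * 512 ))
    if [[ -z $pt ]] && (( size >= min_disk_size )); then
        echo "$block is free and large enough"
    fi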
00:04:57.754   10:42:46	-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:57.754   10:42:46	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:57.754   10:42:46	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:57.754   10:42:46	-- common/autotest_common.sh@10 -- # set +x
00:04:57.754  ************************************
00:04:57.754  START TEST nvme_mount
00:04:57.754  ************************************
00:04:57.754   10:42:46	-- common/autotest_common.sh@1114 -- # nvme_mount
00:04:57.754   10:42:46	-- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:57.754   10:42:46	-- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:57.754   10:42:46	-- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:04:57.754   10:42:46	-- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:57.754   10:42:46	-- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:57.754   10:42:46	-- setup/common.sh@39 -- # local disk=nvme0n1
00:04:57.754   10:42:46	-- setup/common.sh@40 -- # local part_no=1
00:04:57.754   10:42:46	-- setup/common.sh@41 -- # local size=1073741824
00:04:57.754   10:42:46	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:57.754   10:42:46	-- setup/common.sh@44 -- # parts=()
00:04:57.754   10:42:46	-- setup/common.sh@44 -- # local parts
00:04:57.754   10:42:46	-- setup/common.sh@46 -- # (( part = 1 ))
00:04:57.754   10:42:46	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:57.754   10:42:46	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:57.754   10:42:46	-- setup/common.sh@46 -- # (( part++ ))
00:04:57.754   10:42:46	-- setup/common.sh@46 -- # (( part <= part_no ))
00:04:57.754   10:42:46	-- setup/common.sh@51 -- # (( size /= 512 ))
00:04:57.754   10:42:46	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:57.754   10:42:46	-- setup/common.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:58.691  Creating new GPT entries in memory.
00:04:58.691  GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:58.691  other utilities.
00:04:58.691   10:42:47	-- setup/common.sh@57 -- # (( part = 1 ))
00:04:58.691   10:42:47	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:58.691   10:42:47	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:58.691   10:42:47	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:58.691   10:42:47	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:59.628  Creating new GPT entries in memory.
00:04:59.628  The operation has completed successfully.
00:04:59.628   10:42:48	-- setup/common.sh@57 -- # (( part++ ))
00:04:59.628   10:42:48	-- setup/common.sh@57 -- # (( part <= part_no ))
00:04:59.628   10:42:48	-- setup/common.sh@62 -- # wait 2064256
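The partitioning just traced computes each 1 GiB partition in 512-byte sectors (size /= 512 gives 2097152 sectors, hence the 2048:2099199 range above) after zapping the label, issuing one locked sgdisk --new per partition. A sketch with the same arithmetic:

    # Sketch: wipe the GPT label, then carve part_no 1 GiB partitions back-to-back.
    disk=/dev/nvme0n1; part_no=1
    size=$(( 1073741824 / 512 ))                    # 1 GiB in 512-byte sectors
    sgdisk "$disk" --zap-all
    part_start=0; part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))      # 2048 + 2097152 - 1 = 2099199
        flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
    done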
00:04:59.628   10:42:48	-- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:04:59.628   10:42:48	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:59.628   10:42:48	-- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:04:59.629   10:42:48	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:59.629   10:42:48	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:59.629   10:42:48	-- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
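The mkfs helper traced above is small: create the mount point, format the device ext4 (with an optional size cap, used later as "1024M" for the whole-disk case), and mount it. Sketched, with mkfs_and_mount as an illustrative name:

    # Sketch: format a device ext4 and mount it at the test mount point.
    mkfs_and_mount() {
        local dev=$1 mount=$2 size=$3          # size optional, e.g. 1024M
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1
        mkfs.ext4 -qF "$dev" ${size:+$size}    # -qF: quiet, force over existing signatures
        mount "$dev" "$mount"
    }
    mkfs_and_mount /dev/nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount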
00:04:59.629   10:42:48	-- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:59.629   10:42:48	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:04:59.629   10:42:48	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:59.629   10:42:48	-- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:04:59.629   10:42:48	-- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:59.629   10:42:48	-- setup/devices.sh@53 -- # local found=0
00:04:59.629   10:42:48	-- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:59.629   10:42:48	-- setup/devices.sh@56 -- # :
00:04:59.629   10:42:48	-- setup/devices.sh@59 -- # local pci status
00:04:59.629   10:42:48	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:59.629    10:42:48	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:04:59.629    10:42:48	-- setup/devices.sh@47 -- # setup output config
00:04:59.629    10:42:48	-- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.629    10:42:48	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:05:02.917   10:42:51	-- setup/devices.sh@63 -- # found=1
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:02.917   10:42:51	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:02.917   10:42:51	-- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:02.917   10:42:51	-- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:02.917   10:42:51	-- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:02.917   10:42:51	-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:02.917   10:42:51	-- setup/devices.sh@110 -- # cleanup_nvme
00:05:02.917   10:42:51	-- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:02.917   10:42:51	-- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:02.917   10:42:51	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:02.917   10:42:51	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:03.178  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:03.178   10:42:51	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:03.178   10:42:51	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:03.437  /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:03.437  /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:03.437  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:03.437  /dev/nvme0n1: calling ioctl to re-read partition table: Success
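The cleanup above unmounts the test mount and scrubs every signature from the partition and then the parent disk; the wipefs output confirms the ext4 magic (53 ef at 0x438), both GPT headers, and the protective MBR were erased before the kernel re-reads the table. A sketch of that teardown:

    # Sketch: unmount and strip all filesystem/label signatures (needs root).
    cleanup_nvme() {
        local mount=$1
        mountpoint -q "$mount" && umount "$mount"
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic
        [[ -b /dev/nvme0n1 ]]   && wipefs --all /dev/nvme0n1     # GPT headers + PMBR
    }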
00:05:03.437   10:42:52	-- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:05:03.437   10:42:52	-- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:05:03.437   10:42:52	-- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.437   10:42:52	-- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:05:03.437   10:42:52	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:05:03.437   10:42:52	-- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.437   10:42:52	-- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:03.437   10:42:52	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:05:03.437   10:42:52	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:05:03.437   10:42:52	-- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:03.437   10:42:52	-- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:03.437   10:42:52	-- setup/devices.sh@53 -- # local found=0
00:05:03.437   10:42:52	-- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:03.437   10:42:52	-- setup/devices.sh@56 -- # :
00:05:03.437   10:42:52	-- setup/devices.sh@59 -- # local pci status
00:05:03.437   10:42:52	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:03.437    10:42:52	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:05:03.437    10:42:52	-- setup/devices.sh@47 -- # setup output config
00:05:03.437    10:42:52	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:03.437    10:42:52	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:05.970   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.970   10:42:54	-- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:05:05.970   10:42:54	-- setup/devices.sh@63 -- # found=1
00:05:05.970   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.970   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.970   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971   10:42:54	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:05.971   10:42:54	-- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount ]]
00:05:05.971   10:42:54	-- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:05.971   10:42:54	-- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:05:05.971   10:42:54	-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:05:05.971   10:42:54	-- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:05.971   10:42:54	-- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' ''
00:05:05.971   10:42:54	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:05:05.971   10:42:54	-- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:05:05.971   10:42:54	-- setup/devices.sh@50 -- # local mount_point=
00:05:05.971   10:42:54	-- setup/devices.sh@51 -- # local test_file=
00:05:05.971   10:42:54	-- setup/devices.sh@53 -- # local found=0
00:05:05.971   10:42:54	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:05.971   10:42:54	-- setup/devices.sh@59 -- # local pci status
00:05:05.971   10:42:54	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:05.971    10:42:54	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:05:05.971    10:42:54	-- setup/devices.sh@47 -- # setup output config
00:05:05.971    10:42:54	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.971    10:42:54	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:05:09.261   10:42:58	-- setup/devices.sh@63 -- # found=1
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.261   10:42:58	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:09.261   10:42:58	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:09.521   10:42:58	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:09.521   10:42:58	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:09.521   10:42:58	-- setup/devices.sh@68 -- # return 0
00:05:09.521   10:42:58	-- setup/devices.sh@128 -- # cleanup_nvme
00:05:09.521   10:42:58	-- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:09.521   10:42:58	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:09.521   10:42:58	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:09.521   10:42:58	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:09.521  /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:09.521  
00:05:09.521  real	0m11.913s
00:05:09.521  user	0m3.211s
00:05:09.521  sys	0m6.409s
00:05:09.521   10:42:58	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:09.521   10:42:58	-- common/autotest_common.sh@10 -- # set +x
00:05:09.521  ************************************
00:05:09.521  END TEST nvme_mount
00:05:09.521  ************************************
00:05:09.521   10:42:58	-- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:05:09.521   10:42:58	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:09.521   10:42:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:09.521   10:42:58	-- common/autotest_common.sh@10 -- # set +x
00:05:09.521  ************************************
00:05:09.521  START TEST dm_mount
00:05:09.521  ************************************
00:05:09.521   10:42:58	-- common/autotest_common.sh@1114 -- # dm_mount
00:05:09.521   10:42:58	-- setup/devices.sh@144 -- # pv=nvme0n1
00:05:09.521   10:42:58	-- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:05:09.521   10:42:58	-- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:05:09.521   10:42:58	-- setup/devices.sh@148 -- # partition_drive nvme0n1
00:05:09.521   10:42:58	-- setup/common.sh@39 -- # local disk=nvme0n1
00:05:09.521   10:42:58	-- setup/common.sh@40 -- # local part_no=2
00:05:09.521   10:42:58	-- setup/common.sh@41 -- # local size=1073741824
00:05:09.521   10:42:58	-- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:05:09.521   10:42:58	-- setup/common.sh@44 -- # parts=()
00:05:09.521   10:42:58	-- setup/common.sh@44 -- # local parts
00:05:09.521   10:42:58	-- setup/common.sh@46 -- # (( part = 1 ))
00:05:09.521   10:42:58	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:09.521   10:42:58	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:09.521   10:42:58	-- setup/common.sh@46 -- # (( part++ ))
00:05:09.521   10:42:58	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:09.521   10:42:58	-- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:05:09.521   10:42:58	-- setup/common.sh@46 -- # (( part++ ))
00:05:09.521   10:42:58	-- setup/common.sh@46 -- # (( part <= part_no ))
00:05:09.521   10:42:58	-- setup/common.sh@51 -- # (( size /= 512 ))
00:05:09.521   10:42:58	-- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:05:09.521   10:42:58	-- setup/common.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:05:10.460  Creating new GPT entries in memory.
00:05:10.460  GPT data structures destroyed! You may now partition the disk using fdisk or
00:05:10.460  other utilities.
00:05:10.460   10:42:59	-- setup/common.sh@57 -- # (( part = 1 ))
00:05:10.460   10:42:59	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:10.460   10:42:59	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:10.460   10:42:59	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:10.460   10:42:59	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:05:11.840  Creating new GPT entries in memory.
00:05:11.840  The operation has completed successfully.
00:05:11.840   10:43:00	-- setup/common.sh@57 -- # (( part++ ))
00:05:11.840   10:43:00	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:11.840   10:43:00	-- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:05:11.840   10:43:00	-- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:05:11.840   10:43:00	-- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:05:12.778  The operation has completed successfully.
00:05:12.778   10:43:01	-- setup/common.sh@57 -- # (( part++ ))
00:05:12.778   10:43:01	-- setup/common.sh@57 -- # (( part <= part_no ))
00:05:12.778   10:43:01	-- setup/common.sh@62 -- # wait 2068080
00:05:12.778   10:43:01	-- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:05:12.778   10:43:01	-- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:12.778   10:43:01	-- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:12.778   10:43:01	-- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:05:12.778   10:43:01	-- setup/devices.sh@160 -- # for t in {1..5}
00:05:12.778   10:43:01	-- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:12.778   10:43:01	-- setup/devices.sh@161 -- # break
00:05:12.778   10:43:01	-- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:12.778    10:43:01	-- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:05:12.778   10:43:01	-- setup/devices.sh@165 -- # dm=/dev/dm-0
00:05:12.778   10:43:01	-- setup/devices.sh@166 -- # dm=dm-0
00:05:12.778   10:43:01	-- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:05:12.778   10:43:01	-- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
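
dmsetup create with no --table argument reads the mapping table from stdin. A linear concatenation of the two 1 GiB partitions just created would look roughly like this (sector counts assumed from the sgdisk step above; the exact table the test feeds in is not shown in the trace):

  printf '%s\n' \
      '0       2097152 linear /dev/nvme0n1p1 0' \
      '2097152 2097152 linear /dev/nvme0n1p2 0' \
      | dmsetup create nvme_dm_test

The retry loop above (for t in {1..5} ... [[ -e /dev/mapper/nvme_dm_test ]]) then waits for udev to publish the node, and the holders/ checks confirm both partitions are claimed by dm-0.
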
00:05:12.778   10:43:01	-- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:12.778   10:43:01	-- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount size=
00:05:12.778   10:43:01	-- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:12.778   10:43:01	-- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:05:12.778   10:43:01	-- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:05:12.778   10:43:01	-- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
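
The mkfs step condenses to a small helper (a sketch following the trace; $mount stands in for the long dm_mount workspace path): create the mount point, format with ext4, and mount.

  mkfs() {
      local dev=$1 mount=$2
      mkdir -p "$mount"
      [[ -e $dev ]] || return 1
      mkfs.ext4 -qF "$dev"        # -q: quiet, -F: force (skip confirmation prompt)
      mount "$dev" "$mount"
  }
  mkfs /dev/mapper/nvme_dm_test "$TEST_DIR/setup/dm_mount"
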
00:05:12.778   10:43:01	-- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:12.778   10:43:01	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:05:12.778   10:43:01	-- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:05:12.778   10:43:01	-- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:12.778   10:43:01	-- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:05:12.778   10:43:01	-- setup/devices.sh@53 -- # local found=0
00:05:12.778   10:43:01	-- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:12.778   10:43:01	-- setup/devices.sh@56 -- # :
00:05:12.778   10:43:01	-- setup/devices.sh@59 -- # local pci status
00:05:12.778   10:43:01	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:12.778    10:43:01	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:05:12.778    10:43:01	-- setup/devices.sh@47 -- # setup output config
00:05:12.778    10:43:01	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:12.778    10:43:01	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:05:16.070   10:43:04	-- setup/devices.sh@63 -- # found=1
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070   10:43:04	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:16.070   10:43:04	-- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount ]]
00:05:16.070   10:43:04	-- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:16.070   10:43:04	-- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:05:16.070   10:43:04	-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount/test_dm
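
The verify call traced above reduces to roughly the following (a sketch inferred from the xtrace, with the comma-separated mount list simplified to a single substring test): setup.sh config is run with PCI_ALLOWED pinned to the target BDF, and its per-device status lines are scanned for the expected active mounts/holders.

  verify() {
      local dev=$1 mounts=$2 mount_point=$3 test_file=$4 found=0 pci status
      while read -r pci _ _ status; do
          [[ $pci == "$dev" ]] || continue
          # e.g. "Active devices: mount@nvme0n1:nvme_dm_test, so not binding PCI dev"
          [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
      done < <(PCI_ALLOWED="$dev" "$rootdir/scripts/setup.sh" config)
      (( found == 1 )) || return 1
      [[ -z $mount_point ]] || mountpoint -q "$mount_point"
      [[ -z $test_file ]] || [[ -e $test_file ]]
  }
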
00:05:16.070   10:43:04	-- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:16.070   10:43:04	-- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:05:16.070   10:43:04	-- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:05:16.070   10:43:04	-- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:05:16.070   10:43:04	-- setup/devices.sh@50 -- # local mount_point=
00:05:16.070   10:43:04	-- setup/devices.sh@51 -- # local test_file=
00:05:16.070   10:43:04	-- setup/devices.sh@53 -- # local found=0
00:05:16.070   10:43:04	-- setup/devices.sh@55 -- # [[ -n '' ]]
00:05:16.070   10:43:04	-- setup/devices.sh@59 -- # local pci status
00:05:16.070   10:43:04	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:16.070    10:43:04	-- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:05:16.070    10:43:04	-- setup/devices.sh@47 -- # setup output config
00:05:16.070    10:43:04	-- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.070    10:43:04	-- setup/common.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh config
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:05:19.361   10:43:07	-- setup/devices.sh@63 -- # found=1
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:07	-- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:05:19.361   10:43:07	-- setup/devices.sh@60 -- # read -r pci _ _ status
00:05:19.361   10:43:08	-- setup/devices.sh@66 -- # (( found == 1 ))
00:05:19.361   10:43:08	-- setup/devices.sh@68 -- # [[ -n '' ]]
00:05:19.361   10:43:08	-- setup/devices.sh@68 -- # return 0
00:05:19.361   10:43:08	-- setup/devices.sh@187 -- # cleanup_dm
00:05:19.361   10:43:08	-- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:19.361   10:43:08	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:19.361   10:43:08	-- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:05:19.361   10:43:08	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:19.361   10:43:08	-- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:05:19.361  /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:05:19.361   10:43:08	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:19.361   10:43:08	-- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:05:19.361  
00:05:19.361  real	0m9.796s
00:05:19.361  user	0m2.425s
00:05:19.361  sys	0m4.445s
00:05:19.361   10:43:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:19.361   10:43:08	-- common/autotest_common.sh@10 -- # set +x
00:05:19.361  ************************************
00:05:19.361  END TEST dm_mount
00:05:19.361  ************************************
00:05:19.361   10:43:08	-- setup/devices.sh@1 -- # cleanup
00:05:19.361   10:43:08	-- setup/devices.sh@11 -- # cleanup_nvme
00:05:19.361   10:43:08	-- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/nvme_mount
00:05:19.361   10:43:08	-- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:19.361   10:43:08	-- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:05:19.361   10:43:08	-- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:05:19.361   10:43:08	-- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:05:19.620  /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:05:19.620  /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:05:19.620  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:05:19.620  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:05:19.620   10:43:08	-- setup/devices.sh@12 -- # cleanup_dm
00:05:19.620   10:43:08	-- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvme-phy-autotest/spdk/test/setup/dm_mount
00:05:19.620   10:43:08	-- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:05:19.620   10:43:08	-- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:05:19.620   10:43:08	-- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:05:19.620   10:43:08	-- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:05:19.620   10:43:08	-- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
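
The cleanup path mirrors setup: unmount if mounted, remove the dm node if it still exists, then wipefs the partitions and finally the whole disk. The whole-disk pass is what erased the GPT header (offset 0x200), the backup GPT at the end of the device, and the protective-MBR signature (55 aa) in the output above. A condensed sketch ($nvme_mount stands in for the workspace mount path):

  cleanup() {
      mountpoint -q "$nvme_mount" && umount "$nvme_mount"
      [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
      for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
          [[ -b $part ]] && wipefs --all "$part"
      done
      [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
  }
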
00:05:19.620  
00:05:19.620  real	0m26.221s
00:05:19.620  user	0m7.279s
00:05:19.620  sys	0m13.655s
00:05:19.620   10:43:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:19.620   10:43:08	-- common/autotest_common.sh@10 -- # set +x
00:05:19.620  ************************************
00:05:19.620  END TEST devices
00:05:19.620  ************************************
00:05:19.620  
00:05:19.620  real	1m37.194s
00:05:19.620  user	0m28.811s
00:05:19.620  sys	0m52.736s
00:05:19.620   10:43:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:19.620   10:43:08	-- common/autotest_common.sh@10 -- # set +x
00:05:19.620  ************************************
00:05:19.620  END TEST setup.sh
00:05:19.620  ************************************
00:05:19.620   10:43:08	-- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh status
00:05:22.912  Hugepages
00:05:22.912  node     hugesize     free /  total
00:05:22.912  node0   1048576kB        0 /      0
00:05:22.912  node0      2048kB     2048 /   2048
00:05:22.912  node1   1048576kB        0 /      0
00:05:22.912  node1      2048kB        0 /      0
00:05:22.912  
00:05:22.912  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:05:22.912  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:05:22.912  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:05:22.912  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:05:22.912  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:05:22.912  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:05:22.912  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:05:22.912  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:05:22.912  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:05:22.912  NVMe                      0000:5e:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:05:22.912  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:05:22.912  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:05:22.912  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:05:22.912  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:05:22.912  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:05:22.912  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:05:22.912  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:05:22.912  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
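
The Hugepages section of the status report can be derived directly from sysfs; roughly (a sketch of the general technique, not the script's exact code or formatting):

  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*kB; do
          sz=${hp##*hugepages-}
          printf '%-6s %10s %8s / %6s\n' "${node##*/}" "$sz" \
              "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
      done
  done
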
00:05:22.912    10:43:11	-- spdk/autotest.sh@128 -- # uname -s
00:05:22.912   10:43:11	-- spdk/autotest.sh@128 -- # [[ Linux == Linux ]]
00:05:22.912   10:43:11	-- spdk/autotest.sh@130 -- # nvme_namespace_revert
00:05:22.912   10:43:11	-- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:26.202  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:26.202  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:26.203  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:26.203  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:26.203  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:26.460  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:26.460  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:26.460  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:26.460  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:29.752  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:05:29.753   10:43:18	-- common/autotest_common.sh@1527 -- # sleep 1
00:05:30.691   10:43:19	-- common/autotest_common.sh@1528 -- # bdfs=()
00:05:30.691   10:43:19	-- common/autotest_common.sh@1528 -- # local bdfs
00:05:30.691   10:43:19	-- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs))
00:05:30.691    10:43:19	-- common/autotest_common.sh@1529 -- # get_nvme_bdfs
00:05:30.691    10:43:19	-- common/autotest_common.sh@1508 -- # bdfs=()
00:05:30.691    10:43:19	-- common/autotest_common.sh@1508 -- # local bdfs
00:05:30.691    10:43:19	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:30.691     10:43:19	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:30.691     10:43:19	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:05:30.691    10:43:19	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:05:30.691    10:43:19	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
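
The BDF discovery traced here is simply gen_nvme.sh piped through jq; condensed:

  get_nvme_bdfs() {
      local bdfs=()
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} == 0 )) && return 1   # no local NVMe controllers found
      printf '%s\n' "${bdfs[@]}"
  }
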
00:05:30.691   10:43:19	-- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:05:34.121  Waiting for block devices as requested
00:05:34.121  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:05:34.121  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:34.121  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:34.121  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:34.380  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:34.381  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:34.381  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:34.641  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:34.641  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:05:34.641  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:05:34.641  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:05:34.900  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:05:34.900  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:05:34.900  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:05:35.160  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:05:35.160  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:05:35.160  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
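
setup.sh reset returns each device from vfio-pci to its kernel driver, as the arrows above show. One common sysfs mechanism for such a rebind looks like this (a sketch of the general technique only; the script's actual implementation may use new_id/remove_id or other plumbing instead):

  bdf=0000:00:04.7
  echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind
  echo ioatdma > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe
  echo "" > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override
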
00:05:35.420   10:43:24	-- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}"
00:05:35.420    10:43:24	-- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:05:35.420     10:43:24	-- common/autotest_common.sh@1497 -- # grep 0000:5e:00.0/nvme/nvme
00:05:35.420     10:43:24	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0
00:05:35.420    10:43:24	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:05:35.420    10:43:24	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:05:35.420     10:43:24	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:05:35.420    10:43:24	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:05:35.420   10:43:24	-- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0
00:05:35.420   10:43:24	-- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]]
00:05:35.420    10:43:24	-- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:05:35.420    10:43:24	-- common/autotest_common.sh@1540 -- # grep oacs
00:05:35.420    10:43:24	-- common/autotest_common.sh@1540 -- # cut -d: -f2
00:05:35.420   10:43:24	-- common/autotest_common.sh@1540 -- # oacs=' 0xe'
00:05:35.420   10:43:24	-- common/autotest_common.sh@1541 -- # oacs_ns_manage=8
00:05:35.420   10:43:24	-- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]]
00:05:35.420    10:43:24	-- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0
00:05:35.420    10:43:24	-- common/autotest_common.sh@1549 -- # grep unvmcap
00:05:35.420    10:43:24	-- common/autotest_common.sh@1549 -- # cut -d: -f2
00:05:35.420   10:43:24	-- common/autotest_common.sh@1549 -- # unvmcap=' 0'
00:05:35.420   10:43:24	-- common/autotest_common.sh@1550 -- # [[  0 -eq 0 ]]
00:05:35.420   10:43:24	-- common/autotest_common.sh@1552 -- # continue
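
The per-controller check above parses nvme id-ctrl output: OACS (Optional Admin Command Support) came back as 0xe, and bit 3 (value 8) indicates Namespace Management support, so the script goes on to read the unallocated capacity (unvmcap). With unvmcap at 0 there is nothing to revert and the loop moves on. A condensed sketch of that logic:

  for bdf in "${bdfs[@]}"; do
      ctrlr=/dev/nvme0   # resolved from $bdf via /sys/class/nvme in the real script
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)      # " 0xe"
      (( oacs_ns_manage = oacs & 0x8 ))    # bit 3: Namespace Management/Attachment
      if (( oacs_ns_manage != 0 )); then
          unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
          (( unvmcap == 0 )) && continue   # fully allocated: skip the revert
      fi
      # ...namespace revert would happen here...
  done
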
00:05:35.420   10:43:24	-- spdk/autotest.sh@133 -- # timing_exit pre_cleanup
00:05:35.420   10:43:24	-- common/autotest_common.sh@728 -- # xtrace_disable
00:05:35.420   10:43:24	-- common/autotest_common.sh@10 -- # set +x
00:05:35.420   10:43:24	-- spdk/autotest.sh@136 -- # timing_enter afterboot
00:05:35.420   10:43:24	-- common/autotest_common.sh@722 -- # xtrace_disable
00:05:35.420   10:43:24	-- common/autotest_common.sh@10 -- # set +x
00:05:35.420   10:43:24	-- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:05:38.717  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:05:38.717  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:05:38.979  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:05:38.979  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:05:38.979  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:05:38.979  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:05:42.274  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:05:42.274   10:43:30	-- spdk/autotest.sh@138 -- # timing_exit afterboot
00:05:42.274   10:43:30	-- common/autotest_common.sh@728 -- # xtrace_disable
00:05:42.274   10:43:30	-- common/autotest_common.sh@10 -- # set +x
00:05:42.274   10:43:31	-- spdk/autotest.sh@142 -- # opal_revert_cleanup
00:05:42.274   10:43:31	-- common/autotest_common.sh@1586 -- # mapfile -t bdfs
00:05:42.274    10:43:31	-- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54
00:05:42.274    10:43:31	-- common/autotest_common.sh@1572 -- # bdfs=()
00:05:42.274    10:43:31	-- common/autotest_common.sh@1572 -- # local bdfs
00:05:42.274     10:43:31	-- common/autotest_common.sh@1574 -- # get_nvme_bdfs
00:05:42.274     10:43:31	-- common/autotest_common.sh@1508 -- # bdfs=()
00:05:42.274     10:43:31	-- common/autotest_common.sh@1508 -- # local bdfs
00:05:42.274     10:43:31	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:05:42.274      10:43:31	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:05:42.274      10:43:31	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:05:42.274     10:43:31	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:05:42.274     10:43:31	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:05:42.274    10:43:31	-- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs)
00:05:42.274     10:43:31	-- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:05:42.274    10:43:31	-- common/autotest_common.sh@1575 -- # device=0x0a54
00:05:42.274    10:43:31	-- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:05:42.274    10:43:31	-- common/autotest_common.sh@1577 -- # bdfs+=($bdf)
00:05:42.274    10:43:31	-- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:5e:00.0
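
Filtering controllers by PCI device ID is a straight sysfs read; the trace condenses to the following, where 0x0a54 is the Intel datacenter NVMe device ID being targeted for the Opal revert:

  get_nvme_bdfs_by_id() {
      local device_id=$1 bdfs=() bdf device
      for bdf in $(get_nvme_bdfs); do
          device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. 0x0a54
          [[ $device == "$device_id" ]] && bdfs+=("$bdf")
      done
      printf '%s\n' "${bdfs[@]}"
  }
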
00:05:42.274   10:43:31	-- common/autotest_common.sh@1587 -- # [[ -z 0000:5e:00.0 ]]
00:05:42.274   10:43:31	-- common/autotest_common.sh@1592 -- # spdk_tgt_pid=2076372
00:05:42.274   10:43:31	-- common/autotest_common.sh@1593 -- # waitforlisten 2076372
00:05:42.274   10:43:31	-- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:05:42.274   10:43:31	-- common/autotest_common.sh@829 -- # '[' -z 2076372 ']'
00:05:42.274   10:43:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.274   10:43:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:05:42.274   10:43:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.274  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:42.274   10:43:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:05:42.274   10:43:31	-- common/autotest_common.sh@10 -- # set +x
00:05:42.274  [2024-12-15 10:43:31.191172] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:05:42.274  [2024-12-15 10:43:31.191236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076372 ]
00:05:42.274  EAL: No free 2048 kB hugepages reported on node 1
00:05:42.274  [2024-12-15 10:43:31.288047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.533  [2024-12-15 10:43:31.389873] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:42.533  [2024-12-15 10:43:31.390042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.793  [2024-12-15 10:43:31.578052] 'OCF_Core' volume operations registered
00:05:42.793  [2024-12-15 10:43:31.581526] 'OCF_Cache' volume operations registered
00:05:42.793  [2024-12-15 10:43:31.585537] 'OCF Composite' volume operations registered
00:05:42.793  [2024-12-15 10:43:31.589042] 'SPDK_block_device' volume operations registered
00:05:43.407   10:43:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:43.407   10:43:32	-- common/autotest_common.sh@862 -- # return 0
00:05:43.407   10:43:32	-- common/autotest_common.sh@1595 -- # bdf_id=0
00:05:43.407   10:43:32	-- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}"
00:05:43.407   10:43:32	-- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:05:46.698  nvme0n1
00:05:46.698   10:43:35	-- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:05:46.698  [2024-12-15 10:43:35.362356] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal
00:05:46.698  request:
00:05:46.698  {
00:05:46.698    "nvme_ctrlr_name": "nvme0",
00:05:46.698    "password": "test",
00:05:46.698    "method": "bdev_nvme_opal_revert",
00:05:46.698    "req_id": 1
00:05:46.698  }
00:05:46.698  Got JSON-RPC error response
00:05:46.698  response:
00:05:46.698  {
00:05:46.698    "code": -32602,
00:05:46.698    "message": "Invalid parameters"
00:05:46.698  }
00:05:46.698   10:43:35	-- common/autotest_common.sh@1599 -- # true
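
The revert itself is two JSON-RPC calls against the freshly started spdk_tgt. The failure is tolerated (note the trailing `true` in the trace) because a drive without Opal support, like this one, answers with -32602 Invalid parameters:

  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
  "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true
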
00:05:46.698   10:43:35	-- common/autotest_common.sh@1600 -- # (( ++bdf_id ))
00:05:46.698   10:43:35	-- common/autotest_common.sh@1603 -- # killprocess 2076372
00:05:46.698   10:43:35	-- common/autotest_common.sh@936 -- # '[' -z 2076372 ']'
00:05:46.698   10:43:35	-- common/autotest_common.sh@940 -- # kill -0 2076372
00:05:46.698    10:43:35	-- common/autotest_common.sh@941 -- # uname
00:05:46.698   10:43:35	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:05:46.698    10:43:35	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2076372
00:05:46.698   10:43:35	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:05:46.698   10:43:35	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:05:46.698   10:43:35	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2076372'
00:05:46.698  killing process with pid 2076372
00:05:46.698   10:43:35	-- common/autotest_common.sh@955 -- # kill 2076372
00:05:46.698   10:43:35	-- common/autotest_common.sh@960 -- # wait 2076372
00:05:50.890   10:43:39	-- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']'
00:05:50.890   10:43:39	-- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']'
00:05:50.890   10:43:39	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:05:50.890   10:43:39	-- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]]
00:05:50.890   10:43:39	-- spdk/autotest.sh@160 -- # timing_enter lib
00:05:50.890   10:43:39	-- common/autotest_common.sh@722 -- # xtrace_disable
00:05:50.891   10:43:39	-- common/autotest_common.sh@10 -- # set +x
00:05:50.891   10:43:39	-- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh
00:05:50.891   10:43:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:50.891   10:43:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:50.891   10:43:39	-- common/autotest_common.sh@10 -- # set +x
00:05:50.891  ************************************
00:05:50.891  START TEST env
00:05:50.891  ************************************
00:05:50.891   10:43:39	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env.sh
00:05:50.891  * Looking for test storage...
00:05:50.891  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env
00:05:50.891    10:43:39	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:50.891     10:43:39	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:50.891     10:43:39	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:05:50.891    10:43:39	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:05:50.891    10:43:39	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:05:50.891    10:43:39	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:05:50.891    10:43:39	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:05:50.891    10:43:39	-- scripts/common.sh@335 -- # IFS=.-:
00:05:50.891    10:43:39	-- scripts/common.sh@335 -- # read -ra ver1
00:05:50.891    10:43:39	-- scripts/common.sh@336 -- # IFS=.-:
00:05:50.891    10:43:39	-- scripts/common.sh@336 -- # read -ra ver2
00:05:50.891    10:43:39	-- scripts/common.sh@337 -- # local 'op=<'
00:05:50.891    10:43:39	-- scripts/common.sh@339 -- # ver1_l=2
00:05:50.891    10:43:39	-- scripts/common.sh@340 -- # ver2_l=1
00:05:50.891    10:43:39	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:05:50.891    10:43:39	-- scripts/common.sh@343 -- # case "$op" in
00:05:50.891    10:43:39	-- scripts/common.sh@344 -- # : 1
00:05:50.891    10:43:39	-- scripts/common.sh@363 -- # (( v = 0 ))
00:05:50.891    10:43:39	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:50.891     10:43:39	-- scripts/common.sh@364 -- # decimal 1
00:05:50.891     10:43:39	-- scripts/common.sh@352 -- # local d=1
00:05:50.891     10:43:39	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:50.891     10:43:39	-- scripts/common.sh@354 -- # echo 1
00:05:50.891    10:43:39	-- scripts/common.sh@364 -- # ver1[v]=1
00:05:50.891     10:43:39	-- scripts/common.sh@365 -- # decimal 2
00:05:50.891     10:43:39	-- scripts/common.sh@352 -- # local d=2
00:05:51.150     10:43:39	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:51.150     10:43:39	-- scripts/common.sh@354 -- # echo 2
00:05:51.150    10:43:39	-- scripts/common.sh@365 -- # ver2[v]=2
00:05:51.150    10:43:39	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:05:51.150    10:43:39	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:05:51.150    10:43:39	-- scripts/common.sh@367 -- # return 0
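
The lcov version gate traced above splits version strings on '.', '-', and ':' and compares them field by field; condensed (a sketch equivalent in behavior for '<' and '>', not identical with scripts/common.sh):

  cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
      local IFS=.-: op=$2 v max
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
      done
      return 1   # equal: neither strictly '<' nor '>'
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  lt 1.15 2 && echo "lcov predates 2.x"   # matches the trace: true
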
00:05:51.150    10:43:39	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:51.150    10:43:39	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:05:51.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.150  		--rc genhtml_branch_coverage=1
00:05:51.150  		--rc genhtml_function_coverage=1
00:05:51.150  		--rc genhtml_legend=1
00:05:51.150  		--rc geninfo_all_blocks=1
00:05:51.150  		--rc geninfo_unexecuted_blocks=1
00:05:51.150  		
00:05:51.150  		'
00:05:51.150    10:43:39	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:05:51.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.150  		--rc genhtml_branch_coverage=1
00:05:51.150  		--rc genhtml_function_coverage=1
00:05:51.150  		--rc genhtml_legend=1
00:05:51.150  		--rc geninfo_all_blocks=1
00:05:51.150  		--rc geninfo_unexecuted_blocks=1
00:05:51.150  		
00:05:51.150  		'
00:05:51.150    10:43:39	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:05:51.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.150  		--rc genhtml_branch_coverage=1
00:05:51.150  		--rc genhtml_function_coverage=1
00:05:51.150  		--rc genhtml_legend=1
00:05:51.150  		--rc geninfo_all_blocks=1
00:05:51.150  		--rc geninfo_unexecuted_blocks=1
00:05:51.150  		
00:05:51.150  		'
00:05:51.150    10:43:39	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:05:51.150  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:51.150  		--rc genhtml_branch_coverage=1
00:05:51.150  		--rc genhtml_function_coverage=1
00:05:51.150  		--rc genhtml_legend=1
00:05:51.150  		--rc geninfo_all_blocks=1
00:05:51.150  		--rc geninfo_unexecuted_blocks=1
00:05:51.150  		
00:05:51.150  		'
00:05:51.150   10:43:39	-- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut
00:05:51.150   10:43:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:51.150   10:43:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:51.150   10:43:39	-- common/autotest_common.sh@10 -- # set +x
00:05:51.150  ************************************
00:05:51.150  START TEST env_memory
00:05:51.150  ************************************
00:05:51.150   10:43:39	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/memory/memory_ut
00:05:51.150  
00:05:51.150  
00:05:51.150       CUnit - A unit testing framework for C - Version 2.1-3
00:05:51.150       http://cunit.sourceforge.net/
00:05:51.150  
00:05:51.150  
00:05:51.150  Suite: memory
00:05:51.150    Test: alloc and free memory map ...[2024-12-15 10:43:39.966130] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:05:51.150  passed
00:05:51.150    Test: mem map translation ...[2024-12-15 10:43:39.995335] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:05:51.150  [2024-12-15 10:43:39.995358] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:05:51.150  [2024-12-15 10:43:39.995416] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:05:51.150  [2024-12-15 10:43:39.995429] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:05:51.150  passed
00:05:51.150    Test: mem map registration ...[2024-12-15 10:43:40.053165] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:05:51.150  [2024-12-15 10:43:40.053190] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:05:51.150  passed
00:05:51.150    Test: mem map adjacent registrations ...passed
00:05:51.150  
00:05:51.150  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:51.150                suites      1      1    n/a      0        0
00:05:51.150                 tests      4      4      4      0        0
00:05:51.150               asserts    152    152    152      0      n/a
00:05:51.150  
00:05:51.150  Elapsed time =    0.200 seconds
00:05:51.150  
00:05:51.150  real	0m0.214s
00:05:51.150  user	0m0.202s
00:05:51.150  sys	0m0.011s
00:05:51.150   10:43:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:51.150   10:43:40	-- common/autotest_common.sh@10 -- # set +x
00:05:51.150  ************************************
00:05:51.150  END TEST env_memory
00:05:51.150  ************************************
00:05:51.411   10:43:40	-- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:51.411   10:43:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:51.411   10:43:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:51.411   10:43:40	-- common/autotest_common.sh@10 -- # set +x
00:05:51.411  ************************************
00:05:51.411  START TEST env_vtophys
00:05:51.411  ************************************
00:05:51.411   10:43:40	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/vtophys/vtophys
00:05:51.411  EAL: lib.eal log level changed from notice to debug
00:05:51.411  EAL: Detected lcore 0 as core 0 on socket 0
00:05:51.411  EAL: Detected lcore 1 as core 1 on socket 0
00:05:51.411  EAL: Detected lcore 2 as core 2 on socket 0
00:05:51.411  EAL: Detected lcore 3 as core 3 on socket 0
00:05:51.411  EAL: Detected lcore 4 as core 4 on socket 0
00:05:51.411  EAL: Detected lcore 5 as core 8 on socket 0
00:05:51.411  EAL: Detected lcore 6 as core 9 on socket 0
00:05:51.411  EAL: Detected lcore 7 as core 10 on socket 0
00:05:51.411  EAL: Detected lcore 8 as core 11 on socket 0
00:05:51.411  EAL: Detected lcore 9 as core 16 on socket 0
00:05:51.411  EAL: Detected lcore 10 as core 17 on socket 0
00:05:51.411  EAL: Detected lcore 11 as core 18 on socket 0
00:05:51.411  EAL: Detected lcore 12 as core 19 on socket 0
00:05:51.411  EAL: Detected lcore 13 as core 20 on socket 0
00:05:51.411  EAL: Detected lcore 14 as core 24 on socket 0
00:05:51.411  EAL: Detected lcore 15 as core 25 on socket 0
00:05:51.411  EAL: Detected lcore 16 as core 26 on socket 0
00:05:51.411  EAL: Detected lcore 17 as core 27 on socket 0
00:05:51.411  EAL: Detected lcore 18 as core 0 on socket 1
00:05:51.411  EAL: Detected lcore 19 as core 1 on socket 1
00:05:51.411  EAL: Detected lcore 20 as core 2 on socket 1
00:05:51.411  EAL: Detected lcore 21 as core 3 on socket 1
00:05:51.411  EAL: Detected lcore 22 as core 4 on socket 1
00:05:51.411  EAL: Detected lcore 23 as core 8 on socket 1
00:05:51.411  EAL: Detected lcore 24 as core 9 on socket 1
00:05:51.411  EAL: Detected lcore 25 as core 10 on socket 1
00:05:51.411  EAL: Detected lcore 26 as core 11 on socket 1
00:05:51.411  EAL: Detected lcore 27 as core 16 on socket 1
00:05:51.411  EAL: Detected lcore 28 as core 17 on socket 1
00:05:51.411  EAL: Detected lcore 29 as core 18 on socket 1
00:05:51.411  EAL: Detected lcore 30 as core 19 on socket 1
00:05:51.411  EAL: Detected lcore 31 as core 20 on socket 1
00:05:51.411  EAL: Detected lcore 32 as core 24 on socket 1
00:05:51.411  EAL: Detected lcore 33 as core 25 on socket 1
00:05:51.411  EAL: Detected lcore 34 as core 26 on socket 1
00:05:51.411  EAL: Detected lcore 35 as core 27 on socket 1
00:05:51.411  EAL: Detected lcore 36 as core 0 on socket 0
00:05:51.411  EAL: Detected lcore 37 as core 1 on socket 0
00:05:51.411  EAL: Detected lcore 38 as core 2 on socket 0
00:05:51.411  EAL: Detected lcore 39 as core 3 on socket 0
00:05:51.411  EAL: Detected lcore 40 as core 4 on socket 0
00:05:51.411  EAL: Detected lcore 41 as core 8 on socket 0
00:05:51.411  EAL: Detected lcore 42 as core 9 on socket 0
00:05:51.411  EAL: Detected lcore 43 as core 10 on socket 0
00:05:51.411  EAL: Detected lcore 44 as core 11 on socket 0
00:05:51.411  EAL: Detected lcore 45 as core 16 on socket 0
00:05:51.411  EAL: Detected lcore 46 as core 17 on socket 0
00:05:51.411  EAL: Detected lcore 47 as core 18 on socket 0
00:05:51.411  EAL: Detected lcore 48 as core 19 on socket 0
00:05:51.411  EAL: Detected lcore 49 as core 20 on socket 0
00:05:51.411  EAL: Detected lcore 50 as core 24 on socket 0
00:05:51.411  EAL: Detected lcore 51 as core 25 on socket 0
00:05:51.411  EAL: Detected lcore 52 as core 26 on socket 0
00:05:51.411  EAL: Detected lcore 53 as core 27 on socket 0
00:05:51.411  EAL: Detected lcore 54 as core 0 on socket 1
00:05:51.411  EAL: Detected lcore 55 as core 1 on socket 1
00:05:51.411  EAL: Detected lcore 56 as core 2 on socket 1
00:05:51.411  EAL: Detected lcore 57 as core 3 on socket 1
00:05:51.411  EAL: Detected lcore 58 as core 4 on socket 1
00:05:51.411  EAL: Detected lcore 59 as core 8 on socket 1
00:05:51.411  EAL: Detected lcore 60 as core 9 on socket 1
00:05:51.411  EAL: Detected lcore 61 as core 10 on socket 1
00:05:51.411  EAL: Detected lcore 62 as core 11 on socket 1
00:05:51.411  EAL: Detected lcore 63 as core 16 on socket 1
00:05:51.411  EAL: Detected lcore 64 as core 17 on socket 1
00:05:51.411  EAL: Detected lcore 65 as core 18 on socket 1
00:05:51.411  EAL: Detected lcore 66 as core 19 on socket 1
00:05:51.411  EAL: Detected lcore 67 as core 20 on socket 1
00:05:51.411  EAL: Detected lcore 68 as core 24 on socket 1
00:05:51.411  EAL: Detected lcore 69 as core 25 on socket 1
00:05:51.411  EAL: Detected lcore 70 as core 26 on socket 1
00:05:51.411  EAL: Detected lcore 71 as core 27 on socket 1
00:05:51.411  EAL: Maximum logical cores by configuration: 128
00:05:51.411  EAL: Detected CPU lcores: 72
00:05:51.411  EAL: Detected NUMA nodes: 2
00:05:51.411  EAL: Checking presence of .so 'librte_eal.so.24.0'
00:05:51.411  EAL: Detected shared linkage of DPDK
00:05:51.411  EAL: No shared files mode enabled, IPC will be disabled
00:05:51.411  EAL: Bus pci wants IOVA as 'DC'
00:05:51.411  EAL: Buses did not request a specific IOVA mode.
00:05:51.411  EAL: IOMMU is available, selecting IOVA as VA mode.
00:05:51.411  EAL: Selected IOVA mode 'VA'
00:05:51.411  EAL: No free 2048 kB hugepages reported on node 1
00:05:51.411  EAL: Probing VFIO support...
00:05:51.411  EAL: IOMMU type 1 (Type 1) is supported
00:05:51.411  EAL: IOMMU type 7 (sPAPR) is not supported
00:05:51.411  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:05:51.411  EAL: VFIO support initialized
00:05:51.411  EAL: Ask a virtual area of 0x2e000 bytes
00:05:51.411  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:05:51.411  EAL: Setting up physically contiguous memory...
00:05:51.411  EAL: Setting maximum number of open files to 524288
00:05:51.411  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:05:51.411  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:05:51.411  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:51.411  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:51.411  EAL: Ask a virtual area of 0x61000 bytes
00:05:51.411  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:51.411  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:51.411  EAL: Ask a virtual area of 0x400000000 bytes
00:05:51.411  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:51.411  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:51.411  EAL: Hugepages will be freed exactly as allocated.
00:05:51.411  EAL: No shared files mode enabled, IPC is disabled
00:05:51.411  EAL: No shared files mode enabled, IPC is disabled
00:05:51.411  EAL: TSC frequency is ~2300000 KHz
00:05:51.411  EAL: Main lcore 0 is ready (tid=7fd569638a00;cpuset=[0])
00:05:51.411  EAL: Trying to obtain current memory policy.
00:05:51.411  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.411  EAL: Restoring previous memory policy: 0
00:05:51.411  EAL: request: mp_malloc_sync
00:05:51.411  EAL: No shared files mode enabled, IPC is disabled
00:05:51.411  EAL: Heap on socket 0 was expanded by 2MB
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:05:51.412  EAL: Mem event callback 'spdk:(nil)' registered
00:05:51.412  
00:05:51.412  
00:05:51.412       CUnit - A unit testing framework for C - Version 2.1-3
00:05:51.412       http://cunit.sourceforge.net/
00:05:51.412  
00:05:51.412  
00:05:51.412  Suite: components_suite
00:05:51.412    Test: vtophys_malloc_test ...passed
00:05:51.412    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:51.412  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.412  EAL: Restoring previous memory policy: 4
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was expanded by 4MB
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was shrunk by 4MB
00:05:51.412  EAL: Trying to obtain current memory policy.
00:05:51.412  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.412  EAL: Restoring previous memory policy: 4
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was expanded by 6MB
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was shrunk by 6MB
00:05:51.412  EAL: Trying to obtain current memory policy.
00:05:51.412  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.412  EAL: Restoring previous memory policy: 4
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was expanded by 10MB
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was shrunk by 10MB
00:05:51.412  EAL: Trying to obtain current memory policy.
00:05:51.412  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.412  EAL: Restoring previous memory policy: 4
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was expanded by 18MB
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was shrunk by 18MB
00:05:51.412  EAL: Trying to obtain current memory policy.
00:05:51.412  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.412  EAL: Restoring previous memory policy: 4
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was expanded by 34MB
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was shrunk by 34MB
00:05:51.412  EAL: Trying to obtain current memory policy.
00:05:51.412  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.412  EAL: Restoring previous memory policy: 4
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.412  EAL: request: mp_malloc_sync
00:05:51.412  EAL: No shared files mode enabled, IPC is disabled
00:05:51.412  EAL: Heap on socket 0 was expanded by 66MB
00:05:51.412  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.671  EAL: request: mp_malloc_sync
00:05:51.671  EAL: No shared files mode enabled, IPC is disabled
00:05:51.671  EAL: Heap on socket 0 was shrunk by 66MB
00:05:51.671  EAL: Trying to obtain current memory policy.
00:05:51.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.671  EAL: Restoring previous memory policy: 4
00:05:51.671  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.671  EAL: request: mp_malloc_sync
00:05:51.671  EAL: No shared files mode enabled, IPC is disabled
00:05:51.671  EAL: Heap on socket 0 was expanded by 130MB
00:05:51.671  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.671  EAL: request: mp_malloc_sync
00:05:51.671  EAL: No shared files mode enabled, IPC is disabled
00:05:51.671  EAL: Heap on socket 0 was shrunk by 130MB
00:05:51.671  EAL: Trying to obtain current memory policy.
00:05:51.671  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.671  EAL: Restoring previous memory policy: 4
00:05:51.671  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.671  EAL: request: mp_malloc_sync
00:05:51.671  EAL: No shared files mode enabled, IPC is disabled
00:05:51.671  EAL: Heap on socket 0 was expanded by 258MB
00:05:51.671  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.672  EAL: request: mp_malloc_sync
00:05:51.672  EAL: No shared files mode enabled, IPC is disabled
00:05:51.672  EAL: Heap on socket 0 was shrunk by 258MB
00:05:51.672  EAL: Trying to obtain current memory policy.
00:05:51.672  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:51.931  EAL: Restoring previous memory policy: 4
00:05:51.931  EAL: Calling mem event callback 'spdk:(nil)'
00:05:51.931  EAL: request: mp_malloc_sync
00:05:51.931  EAL: No shared files mode enabled, IPC is disabled
00:05:51.931  EAL: Heap on socket 0 was expanded by 514MB
00:05:51.931  EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.190  EAL: request: mp_malloc_sync
00:05:52.190  EAL: No shared files mode enabled, IPC is disabled
00:05:52.190  EAL: Heap on socket 0 was shrunk by 514MB
00:05:52.190  EAL: Trying to obtain current memory policy.
00:05:52.190  EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:52.449  EAL: Restoring previous memory policy: 4
00:05:52.449  EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.449  EAL: request: mp_malloc_sync
00:05:52.449  EAL: No shared files mode enabled, IPC is disabled
00:05:52.449  EAL: Heap on socket 0 was expanded by 1026MB
00:05:52.449  EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.708  EAL: request: mp_malloc_sync
00:05:52.708  EAL: No shared files mode enabled, IPC is disabled
00:05:52.708  EAL: Heap on socket 0 was shrunk by 1026MB
00:05:52.708  passed
00:05:52.708  
00:05:52.708  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:52.708                suites      1      1    n/a      0        0
00:05:52.708                 tests      2      2      2      0        0
00:05:52.708               asserts    497    497    497      0      n/a
00:05:52.708  
00:05:52.708  Elapsed time =    1.186 seconds
00:05:52.708  EAL: Calling mem event callback 'spdk:(nil)'
00:05:52.708  EAL: request: mp_malloc_sync
00:05:52.708  EAL: No shared files mode enabled, IPC is disabled
00:05:52.708  EAL: Heap on socket 0 was shrunk by 2MB
00:05:52.708  EAL: No shared files mode enabled, IPC is disabled
00:05:52.708  EAL: No shared files mode enabled, IPC is disabled
00:05:52.708  EAL: No shared files mode enabled, IPC is disabled
00:05:52.708  
00:05:52.708  real	0m1.411s
00:05:52.708  user	0m0.774s
00:05:52.708  sys	0m0.608s
00:05:52.708   10:43:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:52.708   10:43:41	-- common/autotest_common.sh@10 -- # set +x
00:05:52.708  ************************************
00:05:52.708  END TEST env_vtophys
00:05:52.708  ************************************
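The env_vtophys pass above is stress-testing DPDK's dynamic heap: every trial allocates just past a power-of-two boundary (66MB is 64+2, 130MB is 128+2, up to 1026MB), so EAL must map extra hugepages on malloc and unmap them on free, and each transition fires the mem event callback that SPDK registered (logged as 'spdk:(nil)') to keep its vtophys map current. A minimal sketch of that callback mechanism, assuming a DPDK development setup; the callback name "demo" and the 64 MB trigger allocation are illustrative:

    /* mem_event_demo.c - observe the heap grow/shrink events logged above.
     * Sketch only; assumes DPDK headers and libs (e.g. via pkg-config libdpdk). */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>
    #include <rte_malloc.h>

    /* EAL invokes this whenever hugepage memory is added to or removed from
     * the heap - the events logged as "Calling mem event callback 'spdk:(nil)'". */
    static void
    demo_mem_event(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
        (void)arg;
        printf("heap %s: %zu bytes at %p\n",
               event == RTE_MEM_EVENT_ALLOC ? "expanded" : "shrunk", len, addr);
    }

    int
    main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return 1;

        rte_mem_event_callback_register("demo", demo_mem_event, NULL);

        /* An allocation larger than the current heap forces an ALLOC event;
         * freeing it may let EAL unmap the pages again (FREE event). */
        void *buf = rte_malloc(NULL, 64 * 1024 * 1024, 0);
        rte_free(buf);

        rte_eal_cleanup();
        return 0;
    }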
00:05:52.708   10:43:41	-- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut
00:05:52.708   10:43:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:52.708   10:43:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:52.708   10:43:41	-- common/autotest_common.sh@10 -- # set +x
00:05:52.708  ************************************
00:05:52.708  START TEST env_pci
00:05:52.708  ************************************
00:05:52.708   10:43:41	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/pci/pci_ut
00:05:52.708  
00:05:52.708  
00:05:52.708       CUnit - A unit testing framework for C - Version 2.1-3
00:05:52.708       http://cunit.sourceforge.net/
00:05:52.708  
00:05:52.708  
00:05:52.708  Suite: pci
00:05:52.708    Test: pci_hook ...[2024-12-15 10:43:41.656395] /var/jenkins/workspace/nvme-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2077765 has claimed it
00:05:52.708  EAL: Cannot find device (10000:00:01.0)
00:05:52.708  EAL: Failed to attach device on primary process
00:05:52.708  passed
00:05:52.708  
00:05:52.708  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:52.708                suites      1      1    n/a      0        0
00:05:52.708                 tests      1      1      1      0        0
00:05:52.708               asserts     25     25     25      0      n/a
00:05:52.708  
00:05:52.708  Elapsed time =    0.037 seconds
00:05:52.708  
00:05:52.708  real	0m0.059s
00:05:52.708  user	0m0.020s
00:05:52.708  sys	0m0.039s
00:05:52.708   10:43:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:52.708   10:43:41	-- common/autotest_common.sh@10 -- # set +x
00:05:52.708  ************************************
00:05:52.708  END TEST env_pci
00:05:52.708  ************************************
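The pci_hook test above intentionally uses a device address in the bogus domain 10000 after another process (pid 2077765) has claimed it, so spdk_pci_device_claim() fails to create the lock file and EAL then cannot find the device at all; the pass comes from asserting on that error path. A claim is an advisory lock file under /var/tmp/spdk_pci_lock_<bdf> that keeps two SPDK processes from driving the same device. A hedged sketch of the claim call against real enumerated NVMe-class devices, assuming spdk/env.h and devices bound to vfio-pci/uio as in this CI run:

    /* pci_claim_demo.c - sketch of the claim path pci_hook exercises. */
    #include <stdio.h>
    #include "spdk/env.h"

    static int
    enum_cb(void *ctx, struct spdk_pci_device *dev)
    {
        char bdf[32];
        struct spdk_pci_addr addr = spdk_pci_device_get_addr(dev);

        spdk_pci_addr_fmt(bdf, sizeof(bdf), &addr);
        /* Creates /var/tmp/spdk_pci_lock_<bdf>; fails if another process
         * already holds it - the exact error the test asserts on above. */
        if (spdk_pci_device_claim(dev) < 0)
            printf("%s already claimed by another process\n", bdf);
        else
            printf("claimed %s\n", bdf);
        return 0;
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "pci_claim_demo";
        if (spdk_env_init(&opts) < 0)
            return 1;

        spdk_pci_enumerate(spdk_pci_nvme_get_driver(), enum_cb, NULL);
        return 0;
    }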
00:05:52.967   10:43:41	-- env/env.sh@14 -- # argv='-c 0x1 '
00:05:52.967    10:43:41	-- env/env.sh@15 -- # uname
00:05:52.967   10:43:41	-- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:52.967   10:43:41	-- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:52.967   10:43:41	-- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:52.967   10:43:41	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:05:52.967   10:43:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:52.967   10:43:41	-- common/autotest_common.sh@10 -- # set +x
00:05:52.967  ************************************
00:05:52.967  START TEST env_dpdk_post_init
00:05:52.967  ************************************
00:05:52.967   10:43:41	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:52.967  EAL: Detected CPU lcores: 72
00:05:52.967  EAL: Detected NUMA nodes: 2
00:05:52.967  EAL: Detected shared linkage of DPDK
00:05:52.967  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:52.967  EAL: Selected IOVA mode 'VA'
00:05:52.967  EAL: No free 2048 kB hugepages reported on node 1
00:05:52.967  EAL: VFIO support initialized
00:05:52.967  TELEMETRY: No legacy callbacks, legacy socket not created
00:05:52.967  EAL: Using IOMMU type 1 (Type 1)
00:05:52.967  EAL: Ignore mapping IO port bar(1)
00:05:52.967  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:52.967  EAL: Ignore mapping IO port bar(1)
00:05:52.967  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:52.967  EAL: Ignore mapping IO port bar(1)
00:05:52.967  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:52.967  EAL: Ignore mapping IO port bar(1)
00:05:52.967  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:53.226  EAL: Ignore mapping IO port bar(1)
00:05:53.226  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:53.226  EAL: Ignore mapping IO port bar(1)
00:05:53.226  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:53.226  EAL: Ignore mapping IO port bar(1)
00:05:53.226  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:53.226  EAL: Ignore mapping IO port bar(1)
00:05:53.226  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:53.793  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:05:53.793  EAL: Ignore mapping IO port bar(1)
00:05:53.793  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:53.793  EAL: Ignore mapping IO port bar(1)
00:05:53.794  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:53.794  EAL: Ignore mapping IO port bar(1)
00:05:53.794  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:54.051  EAL: Ignore mapping IO port bar(1)
00:05:54.051  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:54.051  EAL: Ignore mapping IO port bar(1)
00:05:54.051  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:54.051  EAL: Ignore mapping IO port bar(1)
00:05:54.051  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:54.051  EAL: Ignore mapping IO port bar(1)
00:05:54.051  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:54.051  EAL: Ignore mapping IO port bar(1)
00:05:54.051  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:59.316  EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:05:59.316  EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:05:59.575  Starting DPDK initialization...
00:05:59.575  Starting SPDK post initialization...
00:05:59.575  SPDK NVMe probe
00:05:59.575  Attaching to 0000:5e:00.0
00:05:59.575  Attached to 0000:5e:00.0
00:05:59.575  Cleaning up...
00:05:59.575  
00:05:59.575  real	0m6.749s
00:05:59.575  user	0m5.121s
00:05:59.575  sys	0m0.682s
00:05:59.575   10:43:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:59.575   10:43:48	-- common/autotest_common.sh@10 -- # set +x
00:05:59.575  ************************************
00:05:59.575  END TEST env_dpdk_post_init
00:05:59.575  ************************************
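env_dpdk_post_init drives the same bring-up an SPDK application performs: EAL comes up on core mask 0x1 with --base-virtaddr=0x200000000000 (a fixed virtual base keeps multi-process mappings consistent), then SPDK probes the ioat and nvme functions listed above and attaches to 0000:5e:00.0. A compact sketch of that sequence, assuming spdk/env.h and spdk/nvme.h; the callback bodies are illustrative:

    /* env_post_init_demo.c - sketch of EAL init followed by an SPDK NVMe probe. */
    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);  /* e.g. 0000:5e:00.0 above */
        return true;                                /* true = attach to it */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_demo";
        opts.core_mask = "0x1";                     /* matches -c 0x1 above */
        opts.base_virtaddr = 0x200000000000ULL;     /* matches --base-virtaddr */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Probe all local PCIe NVMe controllers (the "SPDK NVMe probe" step). */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }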
00:05:59.575    10:43:48	-- env/env.sh@26 -- # uname
00:05:59.575   10:43:48	-- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:59.575   10:43:48	-- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:59.575   10:43:48	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:59.575   10:43:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:59.575   10:43:48	-- common/autotest_common.sh@10 -- # set +x
00:05:59.575  ************************************
00:05:59.575  START TEST env_mem_callbacks
00:05:59.575  ************************************
00:05:59.575   10:43:48	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:59.575  EAL: Detected CPU lcores: 72
00:05:59.575  EAL: Detected NUMA nodes: 2
00:05:59.575  EAL: Detected shared linkage of DPDK
00:05:59.575  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:59.834  EAL: Selected IOVA mode 'VA'
00:05:59.834  EAL: No free 2048 kB hugepages reported on node 1
00:05:59.834  EAL: VFIO support initialized
00:05:59.834  TELEMETRY: No legacy callbacks, legacy socket not created
00:05:59.834  
00:05:59.834  
00:05:59.834       CUnit - A unit testing framework for C - Version 2.1-3
00:05:59.834       http://cunit.sourceforge.net/
00:05:59.834  
00:05:59.834  
00:05:59.834  Suite: memory
00:05:59.834    Test: test ...
00:05:59.834  register 0x200000200000 2097152
00:05:59.834  malloc 3145728
00:05:59.834  register 0x200000400000 4194304
00:05:59.834  buf 0x200000500000 len 3145728 PASSED
00:05:59.834  malloc 64
00:05:59.834  buf 0x2000004fff40 len 64 PASSED
00:05:59.834  malloc 4194304
00:05:59.834  register 0x200000800000 6291456
00:05:59.834  buf 0x200000a00000 len 4194304 PASSED
00:05:59.834  free 0x200000500000 3145728
00:05:59.834  free 0x2000004fff40 64
00:05:59.834  unregister 0x200000400000 4194304 PASSED
00:05:59.834  free 0x200000a00000 4194304
00:05:59.834  unregister 0x200000800000 6291456 PASSED
00:05:59.834  malloc 8388608
00:05:59.834  register 0x200000400000 10485760
00:05:59.834  buf 0x200000600000 len 8388608 PASSED
00:05:59.834  free 0x200000600000 8388608
00:05:59.834  unregister 0x200000400000 10485760 PASSED
00:05:59.834  passed
00:05:59.834  
00:05:59.834  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:59.834                suites      1      1    n/a      0        0
00:05:59.834                 tests      1      1      1      0        0
00:05:59.834               asserts     15     15     15      0      n/a
00:05:59.834  
00:05:59.834  Elapsed time =    0.008 seconds
00:05:59.834  
00:05:59.834  real	0m0.081s
00:05:59.834  user	0m0.028s
00:05:59.834  sys	0m0.053s
00:05:59.834   10:43:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:59.834   10:43:48	-- common/autotest_common.sh@10 -- # set +x
00:05:59.834  ************************************
00:05:59.834  END TEST env_mem_callbacks
00:05:59.834  ************************************
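The register/unregister lines above come from the test's notification hook, invoked each time the env layer adds or removes memory at 2 MB granularity; note how the 3 MB malloc is backed by a 4 MB registration (0x200000400000, 4194304). Application code can drive the same tracking directly with spdk_mem_register()/spdk_mem_unregister(), which expect 2 MB aligned ranges; a short sketch, with names illustrative:

    /* mem_register_demo.c - make an externally allocated buffer visible to
     * SPDK's translation maps, the API the notify hooks above observe. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "spdk/env.h"

    #define REGION_SIZE (4 * 1024 * 1024)

    int
    main(void)
    {
        struct spdk_env_opts opts;
        void *buf = NULL;

        spdk_env_opts_init(&opts);
        opts.name = "mem_register_demo";
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* Ranges passed to spdk_mem_register() must be 2 MB aligned. */
        if (posix_memalign(&buf, 2 * 1024 * 1024, REGION_SIZE) != 0)
            return 1;

        if (spdk_mem_register(buf, REGION_SIZE) == 0) {
            printf("register %p %d\n", buf, REGION_SIZE);
            /* ... the buffer may now be used on SPDK I/O paths ... */
            spdk_mem_unregister(buf, REGION_SIZE);
            printf("unregister %p %d\n", buf, REGION_SIZE);
        }

        free(buf);
        return 0;
    }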
00:05:59.834  
00:05:59.834  real	0m8.980s
00:05:59.834  user	0m6.362s
00:05:59.834  sys	0m1.700s
00:05:59.834   10:43:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:05:59.834   10:43:48	-- common/autotest_common.sh@10 -- # set +x
00:05:59.834  ************************************
00:05:59.834  END TEST env
00:05:59.834  ************************************
00:05:59.834   10:43:48	-- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh
00:05:59.834   10:43:48	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:59.834   10:43:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:59.834   10:43:48	-- common/autotest_common.sh@10 -- # set +x
00:05:59.834  ************************************
00:05:59.834  START TEST rpc
00:05:59.834  ************************************
00:05:59.834   10:43:48	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc/rpc.sh
00:05:59.834  * Looking for test storage...
00:05:59.834  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc
00:05:59.834    10:43:48	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:05:59.834     10:43:48	-- common/autotest_common.sh@1690 -- # lcov --version
00:05:59.834     10:43:48	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:00.093    10:43:48	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:00.093    10:43:48	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:00.093    10:43:48	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:00.093    10:43:48	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:00.093    10:43:48	-- scripts/common.sh@335 -- # IFS=.-:
00:06:00.093    10:43:48	-- scripts/common.sh@335 -- # read -ra ver1
00:06:00.093    10:43:48	-- scripts/common.sh@336 -- # IFS=.-:
00:06:00.093    10:43:48	-- scripts/common.sh@336 -- # read -ra ver2
00:06:00.093    10:43:48	-- scripts/common.sh@337 -- # local 'op=<'
00:06:00.093    10:43:48	-- scripts/common.sh@339 -- # ver1_l=2
00:06:00.093    10:43:48	-- scripts/common.sh@340 -- # ver2_l=1
00:06:00.093    10:43:48	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:00.094    10:43:48	-- scripts/common.sh@343 -- # case "$op" in
00:06:00.094    10:43:48	-- scripts/common.sh@344 -- # : 1
00:06:00.094    10:43:48	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:00.094    10:43:48	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:00.094     10:43:48	-- scripts/common.sh@364 -- # decimal 1
00:06:00.094     10:43:48	-- scripts/common.sh@352 -- # local d=1
00:06:00.094     10:43:48	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:00.094     10:43:48	-- scripts/common.sh@354 -- # echo 1
00:06:00.094    10:43:48	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:00.094     10:43:48	-- scripts/common.sh@365 -- # decimal 2
00:06:00.094     10:43:48	-- scripts/common.sh@352 -- # local d=2
00:06:00.094     10:43:48	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:00.094     10:43:48	-- scripts/common.sh@354 -- # echo 2
00:06:00.094    10:43:48	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:00.094    10:43:48	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:00.094    10:43:48	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:00.094    10:43:48	-- scripts/common.sh@367 -- # return 0
00:06:00.094    10:43:48	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:00.094    10:43:48	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:00.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.094  		--rc genhtml_branch_coverage=1
00:06:00.094  		--rc genhtml_function_coverage=1
00:06:00.094  		--rc genhtml_legend=1
00:06:00.094  		--rc geninfo_all_blocks=1
00:06:00.094  		--rc geninfo_unexecuted_blocks=1
00:06:00.094  		
00:06:00.094  		'
00:06:00.094    10:43:48	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:00.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.094  		--rc genhtml_branch_coverage=1
00:06:00.094  		--rc genhtml_function_coverage=1
00:06:00.094  		--rc genhtml_legend=1
00:06:00.094  		--rc geninfo_all_blocks=1
00:06:00.094  		--rc geninfo_unexecuted_blocks=1
00:06:00.094  		
00:06:00.094  		'
00:06:00.094    10:43:48	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:00.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.094  		--rc genhtml_branch_coverage=1
00:06:00.094  		--rc genhtml_function_coverage=1
00:06:00.094  		--rc genhtml_legend=1
00:06:00.094  		--rc geninfo_all_blocks=1
00:06:00.094  		--rc geninfo_unexecuted_blocks=1
00:06:00.094  		
00:06:00.094  		'
00:06:00.094    10:43:48	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:00.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:00.094  		--rc genhtml_branch_coverage=1
00:06:00.094  		--rc genhtml_function_coverage=1
00:06:00.094  		--rc genhtml_legend=1
00:06:00.094  		--rc geninfo_all_blocks=1
00:06:00.094  		--rc geninfo_unexecuted_blocks=1
00:06:00.094  		
00:06:00.094  		'
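The scripts/common.sh xtrace above is autotest_common.sh checking whether the installed lcov predates 2.0 (lt 1.15 2): both version strings are split on '.', '-' and ':' and compared field by field numerically, and because the compare succeeds the older option spellings (lcov_branch_coverage rather than branch_coverage) are exported in LCOV_OPTS. The same comparison in C for clarity; the function name is illustrative:

    /* ver_cmp_demo.c - the numeric field comparison cmp_versions performs. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns <0, 0, >0 like strcmp; fields are split on '.', '-' and ':',
     * and the shorter version is implicitly zero-padded. */
    static int
    ver_cmp(const char *a, const char *b)
    {
        char *ea, *eb;

        while (*a || *b) {
            long fa = strtol(a, &ea, 10);
            long fb = strtol(b, &eb, 10);

            if (fa != fb)
                return fa < fb ? -1 : 1;  /* first differing field decides */
            a = ea + strspn(ea, ".-:");
            b = eb + strspn(eb, ".-:");
        }
        return 0;
    }

    int
    main(void)
    {
        /* Mirrors "lt 1.15 2" above: 1 < 2 already in the first field. */
        printf("1.15 < 2 -> %s\n", ver_cmp("1.15", "2") < 0 ? "true" : "false");
        return 0;
    }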
00:06:00.094   10:43:48	-- rpc/rpc.sh@65 -- # spdk_pid=2078948
00:06:00.094   10:43:48	-- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:00.094   10:43:48	-- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:00.094   10:43:48	-- rpc/rpc.sh@67 -- # waitforlisten 2078948
00:06:00.094   10:43:48	-- common/autotest_common.sh@829 -- # '[' -z 2078948 ']'
00:06:00.094   10:43:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:00.094   10:43:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:00.094   10:43:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:00.094  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:00.094   10:43:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:00.094   10:43:48	-- common/autotest_common.sh@10 -- # set +x
00:06:00.094  [2024-12-15 10:43:48.959150] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:00.094  [2024-12-15 10:43:48.959225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2078948 ]
00:06:00.094  EAL: No free 2048 kB hugepages reported on node 1
00:06:00.094  [2024-12-15 10:43:49.065090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:00.353  [2024-12-15 10:43:49.172348] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:00.353  [2024-12-15 10:43:49.172496] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:00.353  [2024-12-15 10:43:49.172513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2078948' to capture a snapshot of events at runtime.
00:06:00.353  [2024-12-15 10:43:49.172528] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2078948 for offline analysis/debug.
00:06:00.353  [2024-12-15 10:43:49.172557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:00.353  [2024-12-15 10:43:49.368644] 'OCF_Core' volume operations registered
00:06:00.612  [2024-12-15 10:43:49.372313] 'OCF_Cache' volume operations registered
00:06:00.612  [2024-12-15 10:43:49.376320] 'OCF Composite' volume operations registered
00:06:00.612  [2024-12-15 10:43:49.379841] 'SPDK_block_device' volume operations registered
00:06:01.547   10:43:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:01.547   10:43:50	-- common/autotest_common.sh@862 -- # return 0
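With spdk_tgt up and waitforlisten satisfied, every rpc_cmd in the tests below is a JSON-RPC 2.0 request over the /var/tmp/spdk.sock Unix socket. A bare-bones client showing that wire format, using only POSIX sockets; the request id and buffer size are arbitrary:

    /* rpc_sock_demo.c - send one JSON-RPC 2.0 request to a running spdk_tgt. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int
    main(void)
    {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char reply[4096];

        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("connect");   /* what waitforlisten retries until it works */
            return 1;
        }

        /* The same call "rpc_cmd bdev_get_bdevs" issues. */
        const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
        if (write(fd, req, strlen(req)) < 0) {
            perror("write");
            return 1;
        }

        ssize_t n = read(fd, reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("%s\n", reply);  /* e.g. {"jsonrpc":"2.0","id":1,"result":[]} */
        }
        close(fd);
        return 0;
    }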
00:06:01.547   10:43:50	-- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc
00:06:01.547   10:43:50	-- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc
00:06:01.547   10:43:50	-- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:01.547   10:43:50	-- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:01.547   10:43:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:01.547   10:43:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:01.547   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547  ************************************
00:06:01.547  START TEST rpc_integrity
00:06:01.547  ************************************
00:06:01.547   10:43:50	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:06:01.547    10:43:50	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:01.547    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547   10:43:50	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:01.547    10:43:50	-- rpc/rpc.sh@13 -- # jq length
00:06:01.547   10:43:50	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:01.547    10:43:50	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:01.547    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547   10:43:50	-- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:01.547    10:43:50	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:01.547    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547   10:43:50	-- rpc/rpc.sh@16 -- # bdevs='[
00:06:01.547  {
00:06:01.547  "name": "Malloc0",
00:06:01.547  "aliases": [
00:06:01.547  "ef7c3498-13a0-46fd-a80b-67f83c9afb99"
00:06:01.547  ],
00:06:01.547  "product_name": "Malloc disk",
00:06:01.547  "block_size": 512,
00:06:01.547  "num_blocks": 16384,
00:06:01.547  "uuid": "ef7c3498-13a0-46fd-a80b-67f83c9afb99",
00:06:01.547  "assigned_rate_limits": {
00:06:01.547  "rw_ios_per_sec": 0,
00:06:01.547  "rw_mbytes_per_sec": 0,
00:06:01.547  "r_mbytes_per_sec": 0,
00:06:01.547  "w_mbytes_per_sec": 0
00:06:01.547  },
00:06:01.547  "claimed": false,
00:06:01.547  "zoned": false,
00:06:01.547  "supported_io_types": {
00:06:01.547  "read": true,
00:06:01.547  "write": true,
00:06:01.547  "unmap": true,
00:06:01.547  "write_zeroes": true,
00:06:01.547  "flush": true,
00:06:01.547  "reset": true,
00:06:01.547  "compare": false,
00:06:01.547  "compare_and_write": false,
00:06:01.547  "abort": true,
00:06:01.547  "nvme_admin": false,
00:06:01.547  "nvme_io": false
00:06:01.547  },
00:06:01.547  "memory_domains": [
00:06:01.547  {
00:06:01.547  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.547  "dma_device_type": 2
00:06:01.547  }
00:06:01.547  ],
00:06:01.547  "driver_specific": {}
00:06:01.547  }
00:06:01.547  ]'
00:06:01.547    10:43:50	-- rpc/rpc.sh@17 -- # jq length
00:06:01.547   10:43:50	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:01.547   10:43:50	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:01.547   10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547  [2024-12-15 10:43:50.328129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:01.547  [2024-12-15 10:43:50.328173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:01.547  [2024-12-15 10:43:50.328191] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23051c0
00:06:01.547  [2024-12-15 10:43:50.328203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:01.547  [2024-12-15 10:43:50.329714] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:01.547  [2024-12-15 10:43:50.329745] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:01.547  Passthru0
00:06:01.547   10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547    10:43:50	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:01.547    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547   10:43:50	-- rpc/rpc.sh@20 -- # bdevs='[
00:06:01.547  {
00:06:01.547  "name": "Malloc0",
00:06:01.547  "aliases": [
00:06:01.547  "ef7c3498-13a0-46fd-a80b-67f83c9afb99"
00:06:01.547  ],
00:06:01.547  "product_name": "Malloc disk",
00:06:01.547  "block_size": 512,
00:06:01.547  "num_blocks": 16384,
00:06:01.547  "uuid": "ef7c3498-13a0-46fd-a80b-67f83c9afb99",
00:06:01.547  "assigned_rate_limits": {
00:06:01.547  "rw_ios_per_sec": 0,
00:06:01.547  "rw_mbytes_per_sec": 0,
00:06:01.547  "r_mbytes_per_sec": 0,
00:06:01.547  "w_mbytes_per_sec": 0
00:06:01.547  },
00:06:01.547  "claimed": true,
00:06:01.547  "claim_type": "exclusive_write",
00:06:01.547  "zoned": false,
00:06:01.547  "supported_io_types": {
00:06:01.547  "read": true,
00:06:01.547  "write": true,
00:06:01.547  "unmap": true,
00:06:01.547  "write_zeroes": true,
00:06:01.547  "flush": true,
00:06:01.547  "reset": true,
00:06:01.547  "compare": false,
00:06:01.547  "compare_and_write": false,
00:06:01.547  "abort": true,
00:06:01.547  "nvme_admin": false,
00:06:01.547  "nvme_io": false
00:06:01.547  },
00:06:01.547  "memory_domains": [
00:06:01.547  {
00:06:01.547  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.547  "dma_device_type": 2
00:06:01.547  }
00:06:01.547  ],
00:06:01.547  "driver_specific": {}
00:06:01.547  },
00:06:01.547  {
00:06:01.547  "name": "Passthru0",
00:06:01.547  "aliases": [
00:06:01.547  "88c6a5ee-8f73-5a6e-beb1-309a919baeb1"
00:06:01.547  ],
00:06:01.547  "product_name": "passthru",
00:06:01.547  "block_size": 512,
00:06:01.547  "num_blocks": 16384,
00:06:01.547  "uuid": "88c6a5ee-8f73-5a6e-beb1-309a919baeb1",
00:06:01.547  "assigned_rate_limits": {
00:06:01.547  "rw_ios_per_sec": 0,
00:06:01.547  "rw_mbytes_per_sec": 0,
00:06:01.547  "r_mbytes_per_sec": 0,
00:06:01.547  "w_mbytes_per_sec": 0
00:06:01.547  },
00:06:01.547  "claimed": false,
00:06:01.547  "zoned": false,
00:06:01.547  "supported_io_types": {
00:06:01.547  "read": true,
00:06:01.547  "write": true,
00:06:01.547  "unmap": true,
00:06:01.547  "write_zeroes": true,
00:06:01.547  "flush": true,
00:06:01.547  "reset": true,
00:06:01.547  "compare": false,
00:06:01.547  "compare_and_write": false,
00:06:01.547  "abort": true,
00:06:01.547  "nvme_admin": false,
00:06:01.547  "nvme_io": false
00:06:01.547  },
00:06:01.547  "memory_domains": [
00:06:01.547  {
00:06:01.547  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.547  "dma_device_type": 2
00:06:01.547  }
00:06:01.547  ],
00:06:01.547  "driver_specific": {
00:06:01.547  "passthru": {
00:06:01.547  "name": "Passthru0",
00:06:01.547  "base_bdev_name": "Malloc0"
00:06:01.547  }
00:06:01.547  }
00:06:01.547  }
00:06:01.547  ]'
00:06:01.547    10:43:50	-- rpc/rpc.sh@21 -- # jq length
00:06:01.547   10:43:50	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:01.547   10:43:50	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:01.547   10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547   10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547   10:43:50	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:06:01.547   10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547   10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547    10:43:50	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:01.547    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.547    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.547   10:43:50	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:01.547    10:43:50	-- rpc/rpc.sh@26 -- # jq length
00:06:01.547   10:43:50	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:01.547  
00:06:01.547  real	0m0.272s
00:06:01.547  user	0m0.158s
00:06:01.547  sys	0m0.051s
00:06:01.547   10:43:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:01.547   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.547  ************************************
00:06:01.547  END TEST rpc_integrity
00:06:01.547  ************************************
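rpc_integrity just walked the create/inspect/delete lifecycle over that socket: bdev_get_bdevs (expects []), bdev_malloc_create 8 512 (an 8 MB bdev, hence num_blocks 16384 and block_size 512 in the dump), bdev_passthru_create -b Malloc0 -p Passthru0, then both deletes and a final jq length-0 check. Hand-assembling the params JSON is error-prone; SPDK's writer API builds it safely. A sketch emitting the bdev_malloc_create params, assuming spdk/json.h; the stdout write callback is illustrative:

    /* json_params_demo.c - emit the params for "bdev_malloc_create 8 512". */
    #include <stdio.h>
    #include "spdk/json.h"

    static int
    write_cb(void *ctx, const void *data, size_t size)
    {
        fwrite(data, 1, size, stdout);
        return 0;
    }

    int
    main(void)
    {
        struct spdk_json_write_ctx *w = spdk_json_write_begin(write_cb, NULL, 0);

        /* {"num_blocks": 16384, "block_size": 512} - 8 MB / 512 B blocks,
         * matching the num_blocks/block_size in the bdev dump above. */
        spdk_json_write_object_begin(w);
        spdk_json_write_named_uint64(w, "num_blocks", 16384);
        spdk_json_write_named_uint32(w, "block_size", 512);
        spdk_json_write_object_end(w);
        spdk_json_write_end(w);
        printf("\n");
        return 0;
    }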
00:06:01.547   10:43:50	-- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:06:01.548   10:43:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:01.548   10:43:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:01.548   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.548  ************************************
00:06:01.548  START TEST rpc_plugins
00:06:01.548  ************************************
00:06:01.548   10:43:50	-- common/autotest_common.sh@1114 -- # rpc_plugins
00:06:01.548    10:43:50	-- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:06:01.548    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.548    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.548    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.548   10:43:50	-- rpc/rpc.sh@30 -- # malloc=Malloc1
00:06:01.548    10:43:50	-- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:06:01.548    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.548    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.806    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.806   10:43:50	-- rpc/rpc.sh@31 -- # bdevs='[
00:06:01.806  {
00:06:01.806  "name": "Malloc1",
00:06:01.806  "aliases": [
00:06:01.806  "3235ffa9-bf31-426b-9b58-a9a8af048d21"
00:06:01.806  ],
00:06:01.806  "product_name": "Malloc disk",
00:06:01.806  "block_size": 4096,
00:06:01.806  "num_blocks": 256,
00:06:01.806  "uuid": "3235ffa9-bf31-426b-9b58-a9a8af048d21",
00:06:01.806  "assigned_rate_limits": {
00:06:01.806  "rw_ios_per_sec": 0,
00:06:01.806  "rw_mbytes_per_sec": 0,
00:06:01.806  "r_mbytes_per_sec": 0,
00:06:01.806  "w_mbytes_per_sec": 0
00:06:01.806  },
00:06:01.806  "claimed": false,
00:06:01.806  "zoned": false,
00:06:01.806  "supported_io_types": {
00:06:01.806  "read": true,
00:06:01.806  "write": true,
00:06:01.806  "unmap": true,
00:06:01.806  "write_zeroes": true,
00:06:01.806  "flush": true,
00:06:01.806  "reset": true,
00:06:01.806  "compare": false,
00:06:01.806  "compare_and_write": false,
00:06:01.806  "abort": true,
00:06:01.806  "nvme_admin": false,
00:06:01.806  "nvme_io": false
00:06:01.806  },
00:06:01.806  "memory_domains": [
00:06:01.806  {
00:06:01.806  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:01.806  "dma_device_type": 2
00:06:01.806  }
00:06:01.806  ],
00:06:01.806  "driver_specific": {}
00:06:01.806  }
00:06:01.806  ]'
00:06:01.806    10:43:50	-- rpc/rpc.sh@32 -- # jq length
00:06:01.806   10:43:50	-- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:06:01.806   10:43:50	-- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:06:01.806   10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.806   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.806   10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.806    10:43:50	-- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:06:01.806    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.806    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.806    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.806   10:43:50	-- rpc/rpc.sh@35 -- # bdevs='[]'
00:06:01.806    10:43:50	-- rpc/rpc.sh@36 -- # jq length
00:06:01.806   10:43:50	-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:06:01.806  
00:06:01.806  real	0m0.152s
00:06:01.806  user	0m0.092s
00:06:01.806  sys	0m0.025s
00:06:01.806   10:43:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:01.806   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.806  ************************************
00:06:01.806  END TEST rpc_plugins
00:06:01.806  ************************************
00:06:01.806   10:43:50	-- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:06:01.806   10:43:50	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:01.806   10:43:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:01.806   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.806  ************************************
00:06:01.806  START TEST rpc_trace_cmd_test
00:06:01.806  ************************************
00:06:01.806   10:43:50	-- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test
00:06:01.806   10:43:50	-- rpc/rpc.sh@40 -- # local info
00:06:01.806    10:43:50	-- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:06:01.806    10:43:50	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:01.806    10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:01.806    10:43:50	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:01.806   10:43:50	-- rpc/rpc.sh@42 -- # info='{
00:06:01.806  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2078948",
00:06:01.806  "tpoint_group_mask": "0x8",
00:06:01.806  "iscsi_conn": {
00:06:01.806  "mask": "0x2",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "scsi": {
00:06:01.806  "mask": "0x4",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "bdev": {
00:06:01.806  "mask": "0x8",
00:06:01.806  "tpoint_mask": "0xffffffffffffffff"
00:06:01.806  },
00:06:01.806  "nvmf_rdma": {
00:06:01.806  "mask": "0x10",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "nvmf_tcp": {
00:06:01.806  "mask": "0x20",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "ftl": {
00:06:01.806  "mask": "0x40",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "blobfs": {
00:06:01.806  "mask": "0x80",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "dsa": {
00:06:01.806  "mask": "0x200",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "thread": {
00:06:01.806  "mask": "0x400",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "nvme_pcie": {
00:06:01.806  "mask": "0x800",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "iaa": {
00:06:01.806  "mask": "0x1000",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "nvme_tcp": {
00:06:01.806  "mask": "0x2000",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  },
00:06:01.806  "bdev_nvme": {
00:06:01.806  "mask": "0x4000",
00:06:01.806  "tpoint_mask": "0x0"
00:06:01.806  }
00:06:01.806  }'
00:06:01.806    10:43:50	-- rpc/rpc.sh@43 -- # jq length
00:06:01.806   10:43:50	-- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']'
00:06:01.806    10:43:50	-- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:06:02.064   10:43:50	-- rpc/rpc.sh@44 -- # '[' true = true ']'
00:06:02.064    10:43:50	-- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:06:02.064   10:43:50	-- rpc/rpc.sh@45 -- # '[' true = true ']'
00:06:02.064    10:43:50	-- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:06:02.064   10:43:50	-- rpc/rpc.sh@46 -- # '[' true = true ']'
00:06:02.064    10:43:50	-- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:06:02.064   10:43:50	-- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:06:02.064  
00:06:02.064  real	0m0.251s
00:06:02.064  user	0m0.201s
00:06:02.064  sys	0m0.043s
00:06:02.064   10:43:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:02.064   10:43:50	-- common/autotest_common.sh@10 -- # set +x
00:06:02.064  ************************************
00:06:02.064  END TEST rpc_trace_cmd_test
00:06:02.064  ************************************
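trace_get_info above reflects the -e bdev flag spdk_tgt was started with: tpoint_group_mask 0x8 is bit 3, the bdev group, and only that group's per-tracepoint mask is fully enabled (0xffffffffffffffff) while every other listed group stays 0x0. The bit math behind the jq assertions, with the mask values taken from the log:

    /* tpoint_mask_demo.c - decode the group mask reported by trace_get_info. */
    #include <stdio.h>
    #include <stdint.h>

    #define TRACE_GROUP_BDEV 3  /* bit 3 -> 0x8, as in "tpoint_group_mask": "0x8" */

    int
    main(void)
    {
        uint64_t group_mask = 0x8;                      /* from trace_get_info */
        uint64_t bdev_tpoints = 0xffffffffffffffffULL;  /* all bdev tracepoints on */

        printf("bdev group enabled: %s\n",
               (group_mask >> TRACE_GROUP_BDEV) & 1 ? "yes" : "no");
        printf("bdev tpoint_mask nonzero: %s\n",
               bdev_tpoints != 0x0 ? "yes" : "no");     /* the "'!=' 0x0" check above */
        return 0;
    }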
00:06:02.064   10:43:51	-- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:06:02.064   10:43:51	-- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:06:02.064   10:43:51	-- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:06:02.064   10:43:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:02.064   10:43:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:02.064   10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.064  ************************************
00:06:02.064  START TEST rpc_daemon_integrity
00:06:02.064  ************************************
00:06:02.064   10:43:51	-- common/autotest_common.sh@1114 -- # rpc_integrity
00:06:02.064    10:43:51	-- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:02.064    10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.064    10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.064    10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.064   10:43:51	-- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:02.064    10:43:51	-- rpc/rpc.sh@13 -- # jq length
00:06:02.323   10:43:51	-- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:02.323    10:43:51	-- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:02.323    10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.323    10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.323    10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.323   10:43:51	-- rpc/rpc.sh@15 -- # malloc=Malloc2
00:06:02.323    10:43:51	-- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:02.323    10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.323    10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.323    10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.323   10:43:51	-- rpc/rpc.sh@16 -- # bdevs='[
00:06:02.323  {
00:06:02.323  "name": "Malloc2",
00:06:02.323  "aliases": [
00:06:02.323  "894b9674-a6ff-46cd-85c7-d402bdaf9a01"
00:06:02.323  ],
00:06:02.323  "product_name": "Malloc disk",
00:06:02.323  "block_size": 512,
00:06:02.323  "num_blocks": 16384,
00:06:02.323  "uuid": "894b9674-a6ff-46cd-85c7-d402bdaf9a01",
00:06:02.323  "assigned_rate_limits": {
00:06:02.323  "rw_ios_per_sec": 0,
00:06:02.323  "rw_mbytes_per_sec": 0,
00:06:02.323  "r_mbytes_per_sec": 0,
00:06:02.323  "w_mbytes_per_sec": 0
00:06:02.323  },
00:06:02.323  "claimed": false,
00:06:02.323  "zoned": false,
00:06:02.323  "supported_io_types": {
00:06:02.323  "read": true,
00:06:02.323  "write": true,
00:06:02.323  "unmap": true,
00:06:02.323  "write_zeroes": true,
00:06:02.323  "flush": true,
00:06:02.323  "reset": true,
00:06:02.323  "compare": false,
00:06:02.323  "compare_and_write": false,
00:06:02.323  "abort": true,
00:06:02.323  "nvme_admin": false,
00:06:02.323  "nvme_io": false
00:06:02.323  },
00:06:02.323  "memory_domains": [
00:06:02.323  {
00:06:02.323  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.323  "dma_device_type": 2
00:06:02.323  }
00:06:02.323  ],
00:06:02.323  "driver_specific": {}
00:06:02.323  }
00:06:02.323  ]'
00:06:02.323    10:43:51	-- rpc/rpc.sh@17 -- # jq length
00:06:02.323   10:43:51	-- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:02.323   10:43:51	-- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:06:02.323   10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.323   10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.324  [2024-12-15 10:43:51.182579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:06:02.324  [2024-12-15 10:43:51.182625] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:02.324  [2024-12-15 10:43:51.182644] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2307030
00:06:02.324  [2024-12-15 10:43:51.182657] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:02.324  [2024-12-15 10:43:51.184016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:02.324  [2024-12-15 10:43:51.184044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:02.324  Passthru0
00:06:02.324   10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.324    10:43:51	-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:02.324    10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.324    10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.324    10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.324   10:43:51	-- rpc/rpc.sh@20 -- # bdevs='[
00:06:02.324  {
00:06:02.324  "name": "Malloc2",
00:06:02.324  "aliases": [
00:06:02.324  "894b9674-a6ff-46cd-85c7-d402bdaf9a01"
00:06:02.324  ],
00:06:02.324  "product_name": "Malloc disk",
00:06:02.324  "block_size": 512,
00:06:02.324  "num_blocks": 16384,
00:06:02.324  "uuid": "894b9674-a6ff-46cd-85c7-d402bdaf9a01",
00:06:02.324  "assigned_rate_limits": {
00:06:02.324  "rw_ios_per_sec": 0,
00:06:02.324  "rw_mbytes_per_sec": 0,
00:06:02.324  "r_mbytes_per_sec": 0,
00:06:02.324  "w_mbytes_per_sec": 0
00:06:02.324  },
00:06:02.324  "claimed": true,
00:06:02.324  "claim_type": "exclusive_write",
00:06:02.324  "zoned": false,
00:06:02.324  "supported_io_types": {
00:06:02.324  "read": true,
00:06:02.324  "write": true,
00:06:02.324  "unmap": true,
00:06:02.324  "write_zeroes": true,
00:06:02.324  "flush": true,
00:06:02.324  "reset": true,
00:06:02.324  "compare": false,
00:06:02.324  "compare_and_write": false,
00:06:02.324  "abort": true,
00:06:02.324  "nvme_admin": false,
00:06:02.324  "nvme_io": false
00:06:02.324  },
00:06:02.324  "memory_domains": [
00:06:02.324  {
00:06:02.324  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.324  "dma_device_type": 2
00:06:02.324  }
00:06:02.324  ],
00:06:02.324  "driver_specific": {}
00:06:02.324  },
00:06:02.324  {
00:06:02.324  "name": "Passthru0",
00:06:02.324  "aliases": [
00:06:02.324  "7b918a26-48cb-5c14-acb3-84704a6b2644"
00:06:02.324  ],
00:06:02.324  "product_name": "passthru",
00:06:02.324  "block_size": 512,
00:06:02.324  "num_blocks": 16384,
00:06:02.324  "uuid": "7b918a26-48cb-5c14-acb3-84704a6b2644",
00:06:02.324  "assigned_rate_limits": {
00:06:02.324  "rw_ios_per_sec": 0,
00:06:02.324  "rw_mbytes_per_sec": 0,
00:06:02.324  "r_mbytes_per_sec": 0,
00:06:02.324  "w_mbytes_per_sec": 0
00:06:02.324  },
00:06:02.324  "claimed": false,
00:06:02.324  "zoned": false,
00:06:02.324  "supported_io_types": {
00:06:02.324  "read": true,
00:06:02.324  "write": true,
00:06:02.324  "unmap": true,
00:06:02.324  "write_zeroes": true,
00:06:02.324  "flush": true,
00:06:02.324  "reset": true,
00:06:02.324  "compare": false,
00:06:02.324  "compare_and_write": false,
00:06:02.324  "abort": true,
00:06:02.324  "nvme_admin": false,
00:06:02.324  "nvme_io": false
00:06:02.324  },
00:06:02.324  "memory_domains": [
00:06:02.324  {
00:06:02.324  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:02.324  "dma_device_type": 2
00:06:02.324  }
00:06:02.324  ],
00:06:02.324  "driver_specific": {
00:06:02.324  "passthru": {
00:06:02.324  "name": "Passthru0",
00:06:02.324  "base_bdev_name": "Malloc2"
00:06:02.324  }
00:06:02.324  }
00:06:02.324  }
00:06:02.324  ]'
00:06:02.324    10:43:51	-- rpc/rpc.sh@21 -- # jq length
00:06:02.324   10:43:51	-- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:06:02.324   10:43:51	-- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:06:02.324   10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.324   10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.324   10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.324   10:43:51	-- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:06:02.324   10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.324   10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.324   10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.324    10:43:51	-- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:06:02.324    10:43:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:02.324    10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.324    10:43:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:02.324   10:43:51	-- rpc/rpc.sh@25 -- # bdevs='[]'
00:06:02.324    10:43:51	-- rpc/rpc.sh@26 -- # jq length
00:06:02.324   10:43:51	-- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:06:02.324  
00:06:02.324  real	0m0.296s
00:06:02.324  user	0m0.189s
00:06:02.324  sys	0m0.049s
00:06:02.324   10:43:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:02.324   10:43:51	-- common/autotest_common.sh@10 -- # set +x
00:06:02.324  ************************************
00:06:02.324  END TEST rpc_daemon_integrity
00:06:02.324  ************************************
00:06:02.584   10:43:51	-- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:06:02.584   10:43:51	-- rpc/rpc.sh@84 -- # killprocess 2078948
00:06:02.584   10:43:51	-- common/autotest_common.sh@936 -- # '[' -z 2078948 ']'
00:06:02.584   10:43:51	-- common/autotest_common.sh@940 -- # kill -0 2078948
00:06:02.584    10:43:51	-- common/autotest_common.sh@941 -- # uname
00:06:02.584   10:43:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:02.584    10:43:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2078948
00:06:02.584   10:43:51	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:02.584   10:43:51	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:02.584   10:43:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2078948'
00:06:02.584  killing process with pid 2078948
00:06:02.584   10:43:51	-- common/autotest_common.sh@955 -- # kill 2078948
00:06:02.584   10:43:51	-- common/autotest_common.sh@960 -- # wait 2078948
00:06:03.153  
00:06:03.153  real	0m3.273s
00:06:03.153  user	0m4.144s
00:06:03.153  sys	0m1.000s
00:06:03.153   10:43:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:03.153   10:43:52	-- common/autotest_common.sh@10 -- # set +x
00:06:03.153  ************************************
00:06:03.153  END TEST rpc
00:06:03.153  ************************************
00:06:03.153   10:43:52	-- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:03.153   10:43:52	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:03.153   10:43:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:03.153   10:43:52	-- common/autotest_common.sh@10 -- # set +x
00:06:03.153  ************************************
00:06:03.153  START TEST rpc_client
00:06:03.153  ************************************
00:06:03.153   10:43:52	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:06:03.153  * Looking for test storage...
00:06:03.153  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client
00:06:03.153    10:43:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:03.153     10:43:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:03.153     10:43:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:03.412    10:43:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:03.412    10:43:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:03.412    10:43:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:03.413    10:43:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:03.413    10:43:52	-- scripts/common.sh@335 -- # IFS=.-:
00:06:03.413    10:43:52	-- scripts/common.sh@335 -- # read -ra ver1
00:06:03.413    10:43:52	-- scripts/common.sh@336 -- # IFS=.-:
00:06:03.413    10:43:52	-- scripts/common.sh@336 -- # read -ra ver2
00:06:03.413    10:43:52	-- scripts/common.sh@337 -- # local 'op=<'
00:06:03.413    10:43:52	-- scripts/common.sh@339 -- # ver1_l=2
00:06:03.413    10:43:52	-- scripts/common.sh@340 -- # ver2_l=1
00:06:03.413    10:43:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:03.413    10:43:52	-- scripts/common.sh@343 -- # case "$op" in
00:06:03.413    10:43:52	-- scripts/common.sh@344 -- # : 1
00:06:03.413    10:43:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:03.413    10:43:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:03.413     10:43:52	-- scripts/common.sh@364 -- # decimal 1
00:06:03.413     10:43:52	-- scripts/common.sh@352 -- # local d=1
00:06:03.413     10:43:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:03.413     10:43:52	-- scripts/common.sh@354 -- # echo 1
00:06:03.413    10:43:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:03.413     10:43:52	-- scripts/common.sh@365 -- # decimal 2
00:06:03.413     10:43:52	-- scripts/common.sh@352 -- # local d=2
00:06:03.413     10:43:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:03.413     10:43:52	-- scripts/common.sh@354 -- # echo 2
00:06:03.413    10:43:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:03.413    10:43:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:03.413    10:43:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:03.413    10:43:52	-- scripts/common.sh@367 -- # return 0
00:06:03.413    10:43:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:03.413    10:43:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:03.413  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.413  		--rc genhtml_branch_coverage=1
00:06:03.413  		--rc genhtml_function_coverage=1
00:06:03.413  		--rc genhtml_legend=1
00:06:03.413  		--rc geninfo_all_blocks=1
00:06:03.413  		--rc geninfo_unexecuted_blocks=1
00:06:03.413  		
00:06:03.413  		'
00:06:03.413    10:43:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:03.413  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.413  		--rc genhtml_branch_coverage=1
00:06:03.413  		--rc genhtml_function_coverage=1
00:06:03.413  		--rc genhtml_legend=1
00:06:03.413  		--rc geninfo_all_blocks=1
00:06:03.413  		--rc geninfo_unexecuted_blocks=1
00:06:03.413  		
00:06:03.413  		'
00:06:03.413    10:43:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:03.413  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.413  		--rc genhtml_branch_coverage=1
00:06:03.413  		--rc genhtml_function_coverage=1
00:06:03.413  		--rc genhtml_legend=1
00:06:03.413  		--rc geninfo_all_blocks=1
00:06:03.413  		--rc geninfo_unexecuted_blocks=1
00:06:03.413  		
00:06:03.413  		'
00:06:03.413    10:43:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:03.413  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.413  		--rc genhtml_branch_coverage=1
00:06:03.413  		--rc genhtml_function_coverage=1
00:06:03.413  		--rc genhtml_legend=1
00:06:03.413  		--rc geninfo_all_blocks=1
00:06:03.413  		--rc geninfo_unexecuted_blocks=1
00:06:03.413  		
00:06:03.413  		'
00:06:03.413   10:43:52	-- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:06:03.413  OK
00:06:03.413   10:43:52	-- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:06:03.413  
00:06:03.413  real	0m0.231s
00:06:03.413  user	0m0.129s
00:06:03.413  sys	0m0.119s
00:06:03.413   10:43:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:03.413   10:43:52	-- common/autotest_common.sh@10 -- # set +x
00:06:03.413  ************************************
00:06:03.413  END TEST rpc_client
00:06:03.413  ************************************
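rpc_client_test above is the C counterpart of rpc.py: it links SPDK's JSON-RPC client library, connects to the same /var/tmp/spdk.sock, performs request round-trips, and the harness prints OK. A hedged connect/teardown sketch, assuming spdk/jsonrpc.h; request construction and polling are elided because those helper signatures have varied across SPDK releases:

    /* jsonrpc_client_demo.c - connect to spdk_tgt with the C client library. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include "spdk/jsonrpc.h"

    int
    main(void)
    {
        struct spdk_jsonrpc_client *client =
            spdk_jsonrpc_client_connect("/var/tmp/spdk.sock", AF_UNIX);

        if (client == NULL) {
            fprintf(stderr, "could not connect to spdk_tgt\n");
            return 1;
        }

        /* Request construction and polling elided - see the raw-socket
         * sketch earlier for the wire format the client exchanges. */
        printf("OK\n");

        spdk_jsonrpc_client_close(client);
        return 0;
    }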
00:06:03.413   10:43:52	-- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh
00:06:03.413   10:43:52	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:03.413   10:43:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:03.413   10:43:52	-- common/autotest_common.sh@10 -- # set +x
00:06:03.413  ************************************
00:06:03.413  START TEST json_config
00:06:03.413  ************************************
00:06:03.413   10:43:52	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config.sh
00:06:03.413    10:43:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:03.413     10:43:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:03.413     10:43:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:03.672    10:43:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:03.672    10:43:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:03.672    10:43:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:03.672    10:43:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:03.672    10:43:52	-- scripts/common.sh@335 -- # IFS=.-:
00:06:03.672    10:43:52	-- scripts/common.sh@335 -- # read -ra ver1
00:06:03.672    10:43:52	-- scripts/common.sh@336 -- # IFS=.-:
00:06:03.672    10:43:52	-- scripts/common.sh@336 -- # read -ra ver2
00:06:03.672    10:43:52	-- scripts/common.sh@337 -- # local 'op=<'
00:06:03.672    10:43:52	-- scripts/common.sh@339 -- # ver1_l=2
00:06:03.672    10:43:52	-- scripts/common.sh@340 -- # ver2_l=1
00:06:03.672    10:43:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:03.672    10:43:52	-- scripts/common.sh@343 -- # case "$op" in
00:06:03.672    10:43:52	-- scripts/common.sh@344 -- # : 1
00:06:03.672    10:43:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:03.673    10:43:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:03.673     10:43:52	-- scripts/common.sh@364 -- # decimal 1
00:06:03.673     10:43:52	-- scripts/common.sh@352 -- # local d=1
00:06:03.673     10:43:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:03.673     10:43:52	-- scripts/common.sh@354 -- # echo 1
00:06:03.673    10:43:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:03.673     10:43:52	-- scripts/common.sh@365 -- # decimal 2
00:06:03.673     10:43:52	-- scripts/common.sh@352 -- # local d=2
00:06:03.673     10:43:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:03.673     10:43:52	-- scripts/common.sh@354 -- # echo 2
00:06:03.673    10:43:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:03.673    10:43:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:03.673    10:43:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:03.673    10:43:52	-- scripts/common.sh@367 -- # return 0
00:06:03.673    10:43:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:03.673    10:43:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:03.673  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.673  		--rc genhtml_branch_coverage=1
00:06:03.673  		--rc genhtml_function_coverage=1
00:06:03.673  		--rc genhtml_legend=1
00:06:03.673  		--rc geninfo_all_blocks=1
00:06:03.673  		--rc geninfo_unexecuted_blocks=1
00:06:03.673  		
00:06:03.673  		'
00:06:03.673    10:43:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:03.673  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.673  		--rc genhtml_branch_coverage=1
00:06:03.673  		--rc genhtml_function_coverage=1
00:06:03.673  		--rc genhtml_legend=1
00:06:03.673  		--rc geninfo_all_blocks=1
00:06:03.673  		--rc geninfo_unexecuted_blocks=1
00:06:03.673  		
00:06:03.673  		'
00:06:03.673    10:43:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:03.673  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.673  		--rc genhtml_branch_coverage=1
00:06:03.673  		--rc genhtml_function_coverage=1
00:06:03.673  		--rc genhtml_legend=1
00:06:03.673  		--rc geninfo_all_blocks=1
00:06:03.673  		--rc geninfo_unexecuted_blocks=1
00:06:03.673  		
00:06:03.673  		'
00:06:03.673    10:43:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:03.673  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.673  		--rc genhtml_branch_coverage=1
00:06:03.673  		--rc genhtml_function_coverage=1
00:06:03.673  		--rc genhtml_legend=1
00:06:03.673  		--rc geninfo_all_blocks=1
00:06:03.673  		--rc geninfo_unexecuted_blocks=1
00:06:03.673  		
00:06:03.673  		'
00:06:03.673   10:43:52	-- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh
00:06:03.673     10:43:52	-- nvmf/common.sh@7 -- # uname -s
00:06:03.673    10:43:52	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:03.673    10:43:52	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:03.673    10:43:52	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:03.673    10:43:52	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:03.673    10:43:52	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:03.673    10:43:52	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:03.673    10:43:52	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:03.673    10:43:52	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:03.673    10:43:52	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:03.673     10:43:52	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:03.673    10:43:52	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e
00:06:03.673    10:43:52	-- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e
00:06:03.673    10:43:52	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:03.673    10:43:52	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:03.673    10:43:52	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:03.673    10:43:52	-- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:06:03.673     10:43:52	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:03.673     10:43:52	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:03.673     10:43:52	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:03.673      10:43:52	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.673      10:43:52	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.673      10:43:52	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.673      10:43:52	-- paths/export.sh@5 -- # export PATH
00:06:03.673      10:43:52	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.673    10:43:52	-- nvmf/common.sh@46 -- # : 0
00:06:03.673    10:43:52	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:06:03.673    10:43:52	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:06:03.673    10:43:52	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:06:03.673    10:43:52	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:03.673    10:43:52	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:03.673    10:43:52	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:06:03.673    10:43:52	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:06:03.673    10:43:52	-- nvmf/common.sh@50 -- # have_pci_nics=0
00:06:03.673   10:43:52	-- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:06:03.673   10:43:52	-- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:06:03.673   10:43:52	-- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:06:03.673   10:43:52	-- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:06:03.673   10:43:52	-- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests'
00:06:03.673  WARNING: No tests are enabled so not running JSON configuration tests
00:06:03.673   10:43:52	-- json_config/json_config.sh@27 -- # exit 0
00:06:03.673  
00:06:03.673  real	0m0.207s
00:06:03.673  user	0m0.130s
00:06:03.673  sys	0m0.086s
00:06:03.673   10:43:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:03.673   10:43:52	-- common/autotest_common.sh@10 -- # set +x
00:06:03.673  ************************************
00:06:03.673  END TEST json_config
00:06:03.673  ************************************
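The exit 0 above comes from a simple flag gate: json_config.sh sums the relevant SPDK_TEST_* switches and skips the whole suite when every one is zero. A minimal sketch of that gate, assuming only what the trace at json_config.sh@25-27 shows (unset flags count as 0 inside (( ))):

    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF \
          + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi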
00:06:03.673   10:43:52	-- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:03.673   10:43:52	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:03.673   10:43:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:03.673   10:43:52	-- common/autotest_common.sh@10 -- # set +x
00:06:03.673  ************************************
00:06:03.673  START TEST json_config_extra_key
00:06:03.673  ************************************
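The START/END banners and the real/user/sys triplet around each suite come from the run_test wrapper in autotest_common.sh. A hedged sketch of that wrapper; the banner strings and the timing are visible in the log, while the argument-count probe ('[' 2 -le 1 ']') has no visible branch here, so it is omitted:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                     # emits the real/user/sys triplet seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }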
00:06:03.673   10:43:52	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:03.673    10:43:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:03.673     10:43:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:03.673     10:43:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:03.933    10:43:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:03.933    10:43:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:03.933    10:43:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:03.933    10:43:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:03.933    10:43:52	-- scripts/common.sh@335 -- # IFS=.-:
00:06:03.933    10:43:52	-- scripts/common.sh@335 -- # read -ra ver1
00:06:03.933    10:43:52	-- scripts/common.sh@336 -- # IFS=.-:
00:06:03.933    10:43:52	-- scripts/common.sh@336 -- # read -ra ver2
00:06:03.933    10:43:52	-- scripts/common.sh@337 -- # local 'op=<'
00:06:03.933    10:43:52	-- scripts/common.sh@339 -- # ver1_l=2
00:06:03.933    10:43:52	-- scripts/common.sh@340 -- # ver2_l=1
00:06:03.933    10:43:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:03.933    10:43:52	-- scripts/common.sh@343 -- # case "$op" in
00:06:03.933    10:43:52	-- scripts/common.sh@344 -- # : 1
00:06:03.933    10:43:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:03.933    10:43:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:03.933     10:43:52	-- scripts/common.sh@364 -- # decimal 1
00:06:03.933     10:43:52	-- scripts/common.sh@352 -- # local d=1
00:06:03.933     10:43:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:03.933     10:43:52	-- scripts/common.sh@354 -- # echo 1
00:06:03.933    10:43:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:03.933     10:43:52	-- scripts/common.sh@365 -- # decimal 2
00:06:03.933     10:43:52	-- scripts/common.sh@352 -- # local d=2
00:06:03.933     10:43:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:03.933     10:43:52	-- scripts/common.sh@354 -- # echo 2
00:06:03.933    10:43:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:03.933    10:43:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:03.933    10:43:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:03.933    10:43:52	-- scripts/common.sh@367 -- # return 0
00:06:03.933    10:43:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:03.933    10:43:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:03.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.933  		--rc genhtml_branch_coverage=1
00:06:03.933  		--rc genhtml_function_coverage=1
00:06:03.933  		--rc genhtml_legend=1
00:06:03.933  		--rc geninfo_all_blocks=1
00:06:03.933  		--rc geninfo_unexecuted_blocks=1
00:06:03.933  		
00:06:03.933  		'
00:06:03.933    10:43:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:03.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.933  		--rc genhtml_branch_coverage=1
00:06:03.933  		--rc genhtml_function_coverage=1
00:06:03.933  		--rc genhtml_legend=1
00:06:03.933  		--rc geninfo_all_blocks=1
00:06:03.933  		--rc geninfo_unexecuted_blocks=1
00:06:03.933  		
00:06:03.933  		'
00:06:03.933    10:43:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:03.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.933  		--rc genhtml_branch_coverage=1
00:06:03.933  		--rc genhtml_function_coverage=1
00:06:03.933  		--rc genhtml_legend=1
00:06:03.933  		--rc geninfo_all_blocks=1
00:06:03.933  		--rc geninfo_unexecuted_blocks=1
00:06:03.933  		
00:06:03.933  		'
00:06:03.933    10:43:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:03.933  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:03.933  		--rc genhtml_branch_coverage=1
00:06:03.933  		--rc genhtml_function_coverage=1
00:06:03.933  		--rc genhtml_legend=1
00:06:03.933  		--rc geninfo_all_blocks=1
00:06:03.933  		--rc geninfo_unexecuted_blocks=1
00:06:03.933  		
00:06:03.933  		'
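The block above is the lcov version probe: lt delegates to cmp_versions, which splits both version strings on '.', '-' and ':' and compares them component by component to decide whether the installed lcov predates 2.x (and hence which --rc options to keep). A condensed sketch of the comparison; the decimal digit-validation helper seen in the trace is omitted:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local op=$2 v ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]    # all components equal: true only for ==, <=, >=
    }
    lt 1.15 2 && echo 'lcov is older than 2.x, keep the legacy --rc options'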
00:06:03.933   10:43:52	-- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvmf/common.sh
00:06:03.933     10:43:52	-- nvmf/common.sh@7 -- # uname -s
00:06:03.933    10:43:52	-- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:03.934    10:43:52	-- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:03.934    10:43:52	-- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:03.934    10:43:52	-- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:03.934    10:43:52	-- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:03.934    10:43:52	-- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:03.934    10:43:52	-- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:03.934    10:43:52	-- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:03.934    10:43:52	-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:03.934     10:43:52	-- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:03.934    10:43:52	-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e
00:06:03.934    10:43:52	-- nvmf/common.sh@18 -- # NVME_HOSTID=00067ae0-6ec8-e711-906e-00163566263e
00:06:03.934    10:43:52	-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:03.934    10:43:52	-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:03.934    10:43:52	-- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:03.934    10:43:52	-- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:06:03.934     10:43:52	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:03.934     10:43:52	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:03.934     10:43:52	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:03.934      10:43:52	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.934      10:43:52	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.934      10:43:52	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.934      10:43:52	-- paths/export.sh@5 -- # export PATH
00:06:03.934      10:43:52	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:03.934    10:43:52	-- nvmf/common.sh@46 -- # : 0
00:06:03.934    10:43:52	-- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:06:03.934    10:43:52	-- nvmf/common.sh@48 -- # build_nvmf_app_args
00:06:03.934    10:43:52	-- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:06:03.934    10:43:52	-- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:03.934    10:43:52	-- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:03.934    10:43:52	-- nvmf/common.sh@32 -- # '[' -n '' ']'
00:06:03.934    10:43:52	-- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:06:03.934    10:43:52	-- nvmf/common.sh@50 -- # have_pci_nics=0
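nvmf/common.sh@46-50 above defaults the shared-memory id and assembles the NVMF_APP argument array. A sketch of that assembly; the two literal guards ('[' 0 -eq 1 ']' and '[' -n '' ']') are shown under assumed variable names, since the log only prints their expanded values:

    : "${NVMF_APP_SHM_ID:=0}"
    export NVMF_APP_SHM_ID
    build_nvmf_app_args() {
        if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then     # assumed name for the first guard
            NVMF_APP=(sudo -E -u "$USER" "${NVMF_APP[@]}")
        fi
        NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)      # shm id plus a full trace mask, as traced
        NVMF_APP+=("${NO_HUGE[@]}")                      # optional no-hugepages flags
    }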
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='')
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@18 -- # declare -A app_params
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path
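The four declare -A pairs above are the whole per-app configuration of this test: pid, RPC socket, process parameters and JSON config, each keyed by the app name so later steps stay generic. Reconstructed, together with the launch that the trace at json_config_extra_key.sh@30 then performs ($rootdir is an assumed stand-in for the repo path in the log):

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]="$rootdir/test/json_config/extra_key.json")

    app=target
    # params deliberately unquoted: the string carries several separate flags
    "$rootdir/build/bin/spdk_tgt" ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!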
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...'
00:06:03.934  INFO: launching applications...
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@24 -- # local app=target
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@25 -- # shift
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]]
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]]
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2079739
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...'
00:06:03.934  Waiting for target to run...
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2079739 /var/tmp/spdk_tgt.sock
00:06:03.934   10:43:52	-- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/extra_key.json
00:06:03.934   10:43:52	-- common/autotest_common.sh@829 -- # '[' -z 2079739 ']'
00:06:03.934   10:43:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:03.934   10:43:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:03.934   10:43:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:03.934  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:03.934   10:43:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:03.934   10:43:52	-- common/autotest_common.sh@10 -- # set +x
00:06:03.934  [2024-12-15 10:43:52.845383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:03.934  [2024-12-15 10:43:52.845459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079739 ]
00:06:03.934  EAL: No free 2048 kB hugepages reported on node 1
00:06:04.502  [2024-12-15 10:43:53.430763] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.761  [2024-12-15 10:43:53.540816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:04.761  [2024-12-15 10:43:53.540972] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.761  [2024-12-15 10:43:53.610200] 'OCF_Core' volume operations registered
00:06:04.761  [2024-12-15 10:43:53.613365] 'OCF_Cache' volume operations registered
00:06:04.761  [2024-12-15 10:43:53.616268] 'OCF Composite' volume operations registered
00:06:04.761  [2024-12-15 10:43:53.619317] 'SPDK_block_device' volume operations registered
00:06:05.019   10:43:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:05.019   10:43:53	-- common/autotest_common.sh@862 -- # return 0
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@35 -- # echo ''
00:06:05.019  
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:06:05.019  INFO: shutting down applications...
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@40 -- # local app=target
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@44 -- # [[ -n 2079739 ]]
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2079739
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@50 -- # kill -0 2079739
00:06:05.019   10:43:53	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:06:05.588   10:43:54	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:06:05.588   10:43:54	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:06:05.588   10:43:54	-- json_config/json_config_extra_key.sh@50 -- # kill -0 2079739
00:06:05.589   10:43:54	-- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@50 -- # kill -0 2079739
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@52 -- # break
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:06:05.848  SPDK target shutdown done
00:06:05.848   10:43:54	-- json_config/json_config_extra_key.sh@82 -- # echo Success
00:06:05.848  Success
00:06:05.848  
00:06:05.848  real	0m2.233s
00:06:05.848  user	0m1.507s
00:06:05.848  sys	0m0.789s
00:06:05.848   10:43:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:05.848   10:43:54	-- common/autotest_common.sh@10 -- # set +x
00:06:05.848  ************************************
00:06:05.848  END TEST json_config_extra_key
00:06:05.848  ************************************
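The shutdown traced above (kill -SIGINT, then kill -0 probes every 0.5 s, up to 30 tries) is a plain signal-then-poll pattern. A standalone sketch of that loop; the failure path after 30 unsuccessful probes is an assumption, since this run exits early via break:

    json_config_test_shutdown_app() {
        local app=$1 i
        kill -SIGINT "${app_pid[$app]}"
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
                app_pid[$app]=''      # process is gone, clear the bookkeeping
                break
            fi
            sleep 0.5
        done
        [[ -z ${app_pid[$app]} ]] || return 1   # assumed: give up if it never died
        echo 'SPDK target shutdown done'
    }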
00:06:06.107   10:43:54	-- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:06.107   10:43:54	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:06.107   10:43:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:06.107   10:43:54	-- common/autotest_common.sh@10 -- # set +x
00:06:06.107  ************************************
00:06:06.107  START TEST alias_rpc
00:06:06.107  ************************************
00:06:06.107   10:43:54	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:06.107  * Looking for test storage...
00:06:06.107  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/alias_rpc
00:06:06.107    10:43:54	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:06.107     10:43:54	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:06.107     10:43:54	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:06.107    10:43:55	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:06.107    10:43:55	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:06.107    10:43:55	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:06.107    10:43:55	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:06.107    10:43:55	-- scripts/common.sh@335 -- # IFS=.-:
00:06:06.107    10:43:55	-- scripts/common.sh@335 -- # read -ra ver1
00:06:06.107    10:43:55	-- scripts/common.sh@336 -- # IFS=.-:
00:06:06.107    10:43:55	-- scripts/common.sh@336 -- # read -ra ver2
00:06:06.107    10:43:55	-- scripts/common.sh@337 -- # local 'op=<'
00:06:06.107    10:43:55	-- scripts/common.sh@339 -- # ver1_l=2
00:06:06.107    10:43:55	-- scripts/common.sh@340 -- # ver2_l=1
00:06:06.107    10:43:55	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:06.107    10:43:55	-- scripts/common.sh@343 -- # case "$op" in
00:06:06.107    10:43:55	-- scripts/common.sh@344 -- # : 1
00:06:06.107    10:43:55	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:06.107    10:43:55	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:06.107     10:43:55	-- scripts/common.sh@364 -- # decimal 1
00:06:06.107     10:43:55	-- scripts/common.sh@352 -- # local d=1
00:06:06.107     10:43:55	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:06.107     10:43:55	-- scripts/common.sh@354 -- # echo 1
00:06:06.107    10:43:55	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:06.107     10:43:55	-- scripts/common.sh@365 -- # decimal 2
00:06:06.107     10:43:55	-- scripts/common.sh@352 -- # local d=2
00:06:06.107     10:43:55	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:06.107     10:43:55	-- scripts/common.sh@354 -- # echo 2
00:06:06.107    10:43:55	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:06.107    10:43:55	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:06.107    10:43:55	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:06.107    10:43:55	-- scripts/common.sh@367 -- # return 0
00:06:06.107    10:43:55	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:06.107    10:43:55	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:06.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:06.107  		--rc genhtml_branch_coverage=1
00:06:06.107  		--rc genhtml_function_coverage=1
00:06:06.107  		--rc genhtml_legend=1
00:06:06.107  		--rc geninfo_all_blocks=1
00:06:06.107  		--rc geninfo_unexecuted_blocks=1
00:06:06.107  		
00:06:06.107  		'
00:06:06.107    10:43:55	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:06.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:06.107  		--rc genhtml_branch_coverage=1
00:06:06.107  		--rc genhtml_function_coverage=1
00:06:06.107  		--rc genhtml_legend=1
00:06:06.107  		--rc geninfo_all_blocks=1
00:06:06.107  		--rc geninfo_unexecuted_blocks=1
00:06:06.107  		
00:06:06.107  		'
00:06:06.107    10:43:55	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:06.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:06.107  		--rc genhtml_branch_coverage=1
00:06:06.107  		--rc genhtml_function_coverage=1
00:06:06.107  		--rc genhtml_legend=1
00:06:06.107  		--rc geninfo_all_blocks=1
00:06:06.107  		--rc geninfo_unexecuted_blocks=1
00:06:06.107  		
00:06:06.107  		'
00:06:06.107    10:43:55	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:06.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:06.107  		--rc genhtml_branch_coverage=1
00:06:06.107  		--rc genhtml_function_coverage=1
00:06:06.107  		--rc genhtml_legend=1
00:06:06.107  		--rc geninfo_all_blocks=1
00:06:06.107  		--rc geninfo_unexecuted_blocks=1
00:06:06.107  		
00:06:06.107  		'
00:06:06.107   10:43:55	-- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:06.107   10:43:55	-- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2079995
00:06:06.107   10:43:55	-- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2079995
00:06:06.107   10:43:55	-- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:06:06.107   10:43:55	-- common/autotest_common.sh@829 -- # '[' -z 2079995 ']'
00:06:06.107   10:43:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:06.107   10:43:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:06.107   10:43:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:06.107  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:06.107   10:43:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:06.107   10:43:55	-- common/autotest_common.sh@10 -- # set +x
00:06:06.365  [2024-12-15 10:43:55.143104] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:06.365  [2024-12-15 10:43:55.143175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2079995 ]
00:06:06.365  EAL: No free 2048 kB hugepages reported on node 1
00:06:06.365  [2024-12-15 10:43:55.240198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.365  [2024-12-15 10:43:55.334826] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:06.365  [2024-12-15 10:43:55.334983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.624  [2024-12-15 10:43:55.541061] 'OCF_Core' volume operations registered
00:06:06.624  [2024-12-15 10:43:55.544564] 'OCF_Cache' volume operations registered
00:06:06.624  [2024-12-15 10:43:55.548510] 'OCF Composite' volume operations registered
00:06:06.624  [2024-12-15 10:43:55.552087] 'SPDK_block_device' volume operations registered
00:06:07.191   10:43:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:07.191   10:43:56	-- common/autotest_common.sh@862 -- # return 0
00:06:07.191   10:43:56	-- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:07.450   10:43:56	-- alias_rpc/alias_rpc.sh@19 -- # killprocess 2079995
00:06:07.450   10:43:56	-- common/autotest_common.sh@936 -- # '[' -z 2079995 ']'
00:06:07.450   10:43:56	-- common/autotest_common.sh@940 -- # kill -0 2079995
00:06:07.450    10:43:56	-- common/autotest_common.sh@941 -- # uname
00:06:07.450   10:43:56	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:07.450    10:43:56	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2079995
00:06:07.450   10:43:56	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:07.450   10:43:56	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:07.450   10:43:56	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2079995'
00:06:07.450  killing process with pid 2079995
00:06:07.450   10:43:56	-- common/autotest_common.sh@955 -- # kill 2079995
00:06:07.450   10:43:56	-- common/autotest_common.sh@960 -- # wait 2079995
00:06:08.018  
00:06:08.018  real	0m2.050s
00:06:08.018  user	0m2.087s
00:06:08.018  sys	0m0.664s
00:06:08.018   10:43:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:08.018   10:43:56	-- common/autotest_common.sh@10 -- # set +x
00:06:08.018  ************************************
00:06:08.018  END TEST alias_rpc
00:06:08.018  ************************************
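killprocess, traced at autotest_common.sh@936-960 above, is the reusable teardown helper: verify the pid argument, confirm the process is alive, look up its command name on Linux, then signal and reap it. A compact sketch; the branch taken when the name resolves to sudo is not exercised in this log, so the bail-out shown for it is an assumption:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                         # must still be running
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1             # assumed handling of the sudo case
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }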
00:06:08.018   10:43:56	-- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]]
00:06:08.018   10:43:56	-- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:08.018   10:43:56	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:08.018   10:43:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:08.018   10:43:56	-- common/autotest_common.sh@10 -- # set +x
00:06:08.018  ************************************
00:06:08.018  START TEST spdkcli_tcp
00:06:08.018  ************************************
00:06:08.018   10:43:56	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:08.277  * Looking for test storage...
00:06:08.277  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli
00:06:08.277    10:43:57	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:08.277     10:43:57	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:08.277     10:43:57	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:08.277    10:43:57	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:08.277    10:43:57	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:08.277    10:43:57	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:08.277    10:43:57	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:08.277    10:43:57	-- scripts/common.sh@335 -- # IFS=.-:
00:06:08.277    10:43:57	-- scripts/common.sh@335 -- # read -ra ver1
00:06:08.277    10:43:57	-- scripts/common.sh@336 -- # IFS=.-:
00:06:08.277    10:43:57	-- scripts/common.sh@336 -- # read -ra ver2
00:06:08.277    10:43:57	-- scripts/common.sh@337 -- # local 'op=<'
00:06:08.277    10:43:57	-- scripts/common.sh@339 -- # ver1_l=2
00:06:08.277    10:43:57	-- scripts/common.sh@340 -- # ver2_l=1
00:06:08.277    10:43:57	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:08.277    10:43:57	-- scripts/common.sh@343 -- # case "$op" in
00:06:08.277    10:43:57	-- scripts/common.sh@344 -- # : 1
00:06:08.277    10:43:57	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:08.277    10:43:57	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:08.277     10:43:57	-- scripts/common.sh@364 -- # decimal 1
00:06:08.277     10:43:57	-- scripts/common.sh@352 -- # local d=1
00:06:08.277     10:43:57	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:08.277     10:43:57	-- scripts/common.sh@354 -- # echo 1
00:06:08.277    10:43:57	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:08.277     10:43:57	-- scripts/common.sh@365 -- # decimal 2
00:06:08.277     10:43:57	-- scripts/common.sh@352 -- # local d=2
00:06:08.277     10:43:57	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:08.277     10:43:57	-- scripts/common.sh@354 -- # echo 2
00:06:08.277    10:43:57	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:08.277    10:43:57	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:08.277    10:43:57	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:08.277    10:43:57	-- scripts/common.sh@367 -- # return 0
00:06:08.277    10:43:57	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:08.277    10:43:57	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:08.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.277  		--rc genhtml_branch_coverage=1
00:06:08.277  		--rc genhtml_function_coverage=1
00:06:08.277  		--rc genhtml_legend=1
00:06:08.277  		--rc geninfo_all_blocks=1
00:06:08.277  		--rc geninfo_unexecuted_blocks=1
00:06:08.277  		
00:06:08.277  		'
00:06:08.277    10:43:57	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:08.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.277  		--rc genhtml_branch_coverage=1
00:06:08.277  		--rc genhtml_function_coverage=1
00:06:08.277  		--rc genhtml_legend=1
00:06:08.277  		--rc geninfo_all_blocks=1
00:06:08.277  		--rc geninfo_unexecuted_blocks=1
00:06:08.277  		
00:06:08.277  		'
00:06:08.277    10:43:57	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:08.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.277  		--rc genhtml_branch_coverage=1
00:06:08.277  		--rc genhtml_function_coverage=1
00:06:08.277  		--rc genhtml_legend=1
00:06:08.277  		--rc geninfo_all_blocks=1
00:06:08.277  		--rc geninfo_unexecuted_blocks=1
00:06:08.277  		
00:06:08.277  		'
00:06:08.277    10:43:57	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:08.277  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:08.277  		--rc genhtml_branch_coverage=1
00:06:08.277  		--rc genhtml_function_coverage=1
00:06:08.277  		--rc genhtml_legend=1
00:06:08.277  		--rc geninfo_all_blocks=1
00:06:08.277  		--rc geninfo_unexecuted_blocks=1
00:06:08.277  		
00:06:08.277  		'
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/common.sh
00:06:08.277    10:43:57	-- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:08.277    10:43:57	-- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/json_config/clear_config.py
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:08.277   10:43:57	-- common/autotest_common.sh@722 -- # xtrace_disable
00:06:08.277   10:43:57	-- common/autotest_common.sh@10 -- # set +x
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2080402
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@27 -- # waitforlisten 2080402
00:06:08.277   10:43:57	-- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:08.277   10:43:57	-- common/autotest_common.sh@829 -- # '[' -z 2080402 ']'
00:06:08.277   10:43:57	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:08.277   10:43:57	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:08.277   10:43:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:08.277  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:08.277   10:43:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:08.277   10:43:57	-- common/autotest_common.sh@10 -- # set +x
00:06:08.277  [2024-12-15 10:43:57.254186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:08.277  [2024-12-15 10:43:57.254260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080402 ]
00:06:08.536  EAL: No free 2048 kB hugepages reported on node 1
00:06:08.536  [2024-12-15 10:43:57.352976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:08.536  [2024-12-15 10:43:57.451819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:08.536  [2024-12-15 10:43:57.452023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:08.536  [2024-12-15 10:43:57.452024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.795  [2024-12-15 10:43:57.634179] 'OCF_Core' volume operations registered
00:06:08.795  [2024-12-15 10:43:57.637453] 'OCF_Cache' volume operations registered
00:06:08.795  [2024-12-15 10:43:57.641107] 'OCF Composite' volume operations registered
00:06:08.795  [2024-12-15 10:43:57.644377] 'SPDK_block_device' volume operations registered
00:06:09.363   10:43:58	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:09.363   10:43:58	-- common/autotest_common.sh@862 -- # return 0
00:06:09.363   10:43:58	-- spdkcli/tcp.sh@31 -- # socat_pid=2080585
00:06:09.363   10:43:58	-- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:09.363   10:43:58	-- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:09.622  [
00:06:09.622    "bdev_malloc_delete",
00:06:09.622    "bdev_malloc_create",
00:06:09.622    "bdev_null_resize",
00:06:09.622    "bdev_null_delete",
00:06:09.622    "bdev_null_create",
00:06:09.622    "bdev_nvme_cuse_unregister",
00:06:09.622    "bdev_nvme_cuse_register",
00:06:09.622    "bdev_opal_new_user",
00:06:09.622    "bdev_opal_set_lock_state",
00:06:09.622    "bdev_opal_delete",
00:06:09.622    "bdev_opal_get_info",
00:06:09.622    "bdev_opal_create",
00:06:09.622    "bdev_nvme_opal_revert",
00:06:09.622    "bdev_nvme_opal_init",
00:06:09.622    "bdev_nvme_send_cmd",
00:06:09.622    "bdev_nvme_get_path_iostat",
00:06:09.622    "bdev_nvme_get_mdns_discovery_info",
00:06:09.622    "bdev_nvme_stop_mdns_discovery",
00:06:09.622    "bdev_nvme_start_mdns_discovery",
00:06:09.622    "bdev_nvme_set_multipath_policy",
00:06:09.622    "bdev_nvme_set_preferred_path",
00:06:09.622    "bdev_nvme_get_io_paths",
00:06:09.622    "bdev_nvme_remove_error_injection",
00:06:09.622    "bdev_nvme_add_error_injection",
00:06:09.622    "bdev_nvme_get_discovery_info",
00:06:09.622    "bdev_nvme_stop_discovery",
00:06:09.622    "bdev_nvme_start_discovery",
00:06:09.622    "bdev_nvme_get_controller_health_info",
00:06:09.622    "bdev_nvme_disable_controller",
00:06:09.622    "bdev_nvme_enable_controller",
00:06:09.622    "bdev_nvme_reset_controller",
00:06:09.622    "bdev_nvme_get_transport_statistics",
00:06:09.622    "bdev_nvme_apply_firmware",
00:06:09.622    "bdev_nvme_detach_controller",
00:06:09.622    "bdev_nvme_get_controllers",
00:06:09.622    "bdev_nvme_attach_controller",
00:06:09.622    "bdev_nvme_set_hotplug",
00:06:09.622    "bdev_nvme_set_options",
00:06:09.623    "bdev_passthru_delete",
00:06:09.623    "bdev_passthru_create",
00:06:09.623    "bdev_lvol_grow_lvstore",
00:06:09.623    "bdev_lvol_get_lvols",
00:06:09.623    "bdev_lvol_get_lvstores",
00:06:09.623    "bdev_lvol_delete",
00:06:09.623    "bdev_lvol_set_read_only",
00:06:09.623    "bdev_lvol_resize",
00:06:09.623    "bdev_lvol_decouple_parent",
00:06:09.623    "bdev_lvol_inflate",
00:06:09.623    "bdev_lvol_rename",
00:06:09.623    "bdev_lvol_clone_bdev",
00:06:09.623    "bdev_lvol_clone",
00:06:09.623    "bdev_lvol_snapshot",
00:06:09.623    "bdev_lvol_create",
00:06:09.623    "bdev_lvol_delete_lvstore",
00:06:09.623    "bdev_lvol_rename_lvstore",
00:06:09.623    "bdev_lvol_create_lvstore",
00:06:09.623    "bdev_raid_set_options",
00:06:09.623    "bdev_raid_remove_base_bdev",
00:06:09.623    "bdev_raid_add_base_bdev",
00:06:09.623    "bdev_raid_delete",
00:06:09.623    "bdev_raid_create",
00:06:09.623    "bdev_raid_get_bdevs",
00:06:09.623    "bdev_error_inject_error",
00:06:09.623    "bdev_error_delete",
00:06:09.623    "bdev_error_create",
00:06:09.623    "bdev_split_delete",
00:06:09.623    "bdev_split_create",
00:06:09.623    "bdev_delay_delete",
00:06:09.623    "bdev_delay_create",
00:06:09.623    "bdev_delay_update_latency",
00:06:09.623    "bdev_zone_block_delete",
00:06:09.623    "bdev_zone_block_create",
00:06:09.623    "blobfs_create",
00:06:09.623    "blobfs_detect",
00:06:09.623    "blobfs_set_cache_size",
00:06:09.623    "bdev_ocf_flush_status",
00:06:09.623    "bdev_ocf_flush_start",
00:06:09.623    "bdev_ocf_set_seqcutoff",
00:06:09.623    "bdev_ocf_set_cache_mode",
00:06:09.623    "bdev_ocf_get_bdevs",
00:06:09.623    "bdev_ocf_reset_stats",
00:06:09.623    "bdev_ocf_get_stats",
00:06:09.623    "bdev_ocf_delete",
00:06:09.623    "bdev_ocf_create",
00:06:09.623    "bdev_aio_delete",
00:06:09.623    "bdev_aio_rescan",
00:06:09.623    "bdev_aio_create",
00:06:09.623    "bdev_ftl_set_property",
00:06:09.623    "bdev_ftl_get_properties",
00:06:09.623    "bdev_ftl_get_stats",
00:06:09.623    "bdev_ftl_unmap",
00:06:09.623    "bdev_ftl_unload",
00:06:09.623    "bdev_ftl_delete",
00:06:09.623    "bdev_ftl_load",
00:06:09.623    "bdev_ftl_create",
00:06:09.623    "bdev_virtio_attach_controller",
00:06:09.623    "bdev_virtio_scsi_get_devices",
00:06:09.623    "bdev_virtio_detach_controller",
00:06:09.623    "bdev_virtio_blk_set_hotplug",
00:06:09.623    "bdev_iscsi_delete",
00:06:09.623    "bdev_iscsi_create",
00:06:09.623    "bdev_iscsi_set_options",
00:06:09.623    "accel_error_inject_error",
00:06:09.623    "ioat_scan_accel_module",
00:06:09.623    "dsa_scan_accel_module",
00:06:09.623    "iaa_scan_accel_module",
00:06:09.623    "iscsi_set_options",
00:06:09.623    "iscsi_get_auth_groups",
00:06:09.623    "iscsi_auth_group_remove_secret",
00:06:09.623    "iscsi_auth_group_add_secret",
00:06:09.623    "iscsi_delete_auth_group",
00:06:09.623    "iscsi_create_auth_group",
00:06:09.623    "iscsi_set_discovery_auth",
00:06:09.623    "iscsi_get_options",
00:06:09.623    "iscsi_target_node_request_logout",
00:06:09.623    "iscsi_target_node_set_redirect",
00:06:09.623    "iscsi_target_node_set_auth",
00:06:09.623    "iscsi_target_node_add_lun",
00:06:09.623    "iscsi_get_connections",
00:06:09.623    "iscsi_portal_group_set_auth",
00:06:09.623    "iscsi_start_portal_group",
00:06:09.623    "iscsi_delete_portal_group",
00:06:09.623    "iscsi_create_portal_group",
00:06:09.623    "iscsi_get_portal_groups",
00:06:09.623    "iscsi_delete_target_node",
00:06:09.623    "iscsi_target_node_remove_pg_ig_maps",
00:06:09.623    "iscsi_target_node_add_pg_ig_maps",
00:06:09.623    "iscsi_create_target_node",
00:06:09.623    "iscsi_get_target_nodes",
00:06:09.623    "iscsi_delete_initiator_group",
00:06:09.623    "iscsi_initiator_group_remove_initiators",
00:06:09.623    "iscsi_initiator_group_add_initiators",
00:06:09.623    "iscsi_create_initiator_group",
00:06:09.623    "iscsi_get_initiator_groups",
00:06:09.623    "nvmf_set_crdt",
00:06:09.623    "nvmf_set_config",
00:06:09.623    "nvmf_set_max_subsystems",
00:06:09.623    "nvmf_subsystem_get_listeners",
00:06:09.623    "nvmf_subsystem_get_qpairs",
00:06:09.623    "nvmf_subsystem_get_controllers",
00:06:09.623    "nvmf_get_stats",
00:06:09.623    "nvmf_get_transports",
00:06:09.623    "nvmf_create_transport",
00:06:09.623    "nvmf_get_targets",
00:06:09.623    "nvmf_delete_target",
00:06:09.623    "nvmf_create_target",
00:06:09.623    "nvmf_subsystem_allow_any_host",
00:06:09.623    "nvmf_subsystem_remove_host",
00:06:09.623    "nvmf_subsystem_add_host",
00:06:09.623    "nvmf_subsystem_remove_ns",
00:06:09.623    "nvmf_subsystem_add_ns",
00:06:09.623    "nvmf_subsystem_listener_set_ana_state",
00:06:09.623    "nvmf_discovery_get_referrals",
00:06:09.623    "nvmf_discovery_remove_referral",
00:06:09.623    "nvmf_discovery_add_referral",
00:06:09.623    "nvmf_subsystem_remove_listener",
00:06:09.623    "nvmf_subsystem_add_listener",
00:06:09.623    "nvmf_delete_subsystem",
00:06:09.623    "nvmf_create_subsystem",
00:06:09.623    "nvmf_get_subsystems",
00:06:09.623    "env_dpdk_get_mem_stats",
00:06:09.623    "nbd_get_disks",
00:06:09.623    "nbd_stop_disk",
00:06:09.623    "nbd_start_disk",
00:06:09.623    "ublk_recover_disk",
00:06:09.623    "ublk_get_disks",
00:06:09.623    "ublk_stop_disk",
00:06:09.623    "ublk_start_disk",
00:06:09.623    "ublk_destroy_target",
00:06:09.623    "ublk_create_target",
00:06:09.623    "virtio_blk_create_transport",
00:06:09.623    "virtio_blk_get_transports",
00:06:09.623    "vhost_controller_set_coalescing",
00:06:09.623    "vhost_get_controllers",
00:06:09.623    "vhost_delete_controller",
00:06:09.623    "vhost_create_blk_controller",
00:06:09.623    "vhost_scsi_controller_remove_target",
00:06:09.623    "vhost_scsi_controller_add_target",
00:06:09.623    "vhost_start_scsi_controller",
00:06:09.623    "vhost_create_scsi_controller",
00:06:09.623    "thread_set_cpumask",
00:06:09.623    "framework_get_scheduler",
00:06:09.623    "framework_set_scheduler",
00:06:09.623    "framework_get_reactors",
00:06:09.623    "thread_get_io_channels",
00:06:09.623    "thread_get_pollers",
00:06:09.623    "thread_get_stats",
00:06:09.623    "framework_monitor_context_switch",
00:06:09.623    "spdk_kill_instance",
00:06:09.623    "log_enable_timestamps",
00:06:09.623    "log_get_flags",
00:06:09.623    "log_clear_flag",
00:06:09.623    "log_set_flag",
00:06:09.623    "log_get_level",
00:06:09.623    "log_set_level",
00:06:09.623    "log_get_print_level",
00:06:09.623    "log_set_print_level",
00:06:09.623    "framework_enable_cpumask_locks",
00:06:09.623    "framework_disable_cpumask_locks",
00:06:09.623    "framework_wait_init",
00:06:09.623    "framework_start_init",
00:06:09.623    "scsi_get_devices",
00:06:09.623    "bdev_get_histogram",
00:06:09.623    "bdev_enable_histogram",
00:06:09.623    "bdev_set_qos_limit",
00:06:09.623    "bdev_set_qd_sampling_period",
00:06:09.623    "bdev_get_bdevs",
00:06:09.623    "bdev_reset_iostat",
00:06:09.623    "bdev_get_iostat",
00:06:09.623    "bdev_examine",
00:06:09.623    "bdev_wait_for_examine",
00:06:09.623    "bdev_set_options",
00:06:09.623    "notify_get_notifications",
00:06:09.623    "notify_get_types",
00:06:09.623    "accel_get_stats",
00:06:09.623    "accel_set_options",
00:06:09.623    "accel_set_driver",
00:06:09.623    "accel_crypto_key_destroy",
00:06:09.623    "accel_crypto_keys_get",
00:06:09.623    "accel_crypto_key_create",
00:06:09.623    "accel_assign_opc",
00:06:09.623    "accel_get_module_info",
00:06:09.623    "accel_get_opc_assignments",
00:06:09.623    "vmd_rescan",
00:06:09.623    "vmd_remove_device",
00:06:09.623    "vmd_enable",
00:06:09.623    "sock_set_default_impl",
00:06:09.623    "sock_impl_set_options",
00:06:09.623    "sock_impl_get_options",
00:06:09.623    "iobuf_get_stats",
00:06:09.623    "iobuf_set_options",
00:06:09.623    "framework_get_pci_devices",
00:06:09.623    "framework_get_config",
00:06:09.623    "framework_get_subsystems",
00:06:09.623    "trace_get_info",
00:06:09.623    "trace_get_tpoint_group_mask",
00:06:09.623    "trace_disable_tpoint_group",
00:06:09.623    "trace_enable_tpoint_group",
00:06:09.623    "trace_clear_tpoint_mask",
00:06:09.623    "trace_set_tpoint_mask",
00:06:09.623    "spdk_get_version",
00:06:09.623    "rpc_get_methods"
00:06:09.623  ]
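The method list above was fetched over TCP even though spdk_tgt only listens on a UNIX socket: the trace at tcp.sh@30-33 starts socat as a TCP-to-UNIX bridge and points rpc.py at it. The equivalent standalone invocation, using exactly the flags from the trace (the trailing kill of the bridge is assumed; the script's err_cleanup handler is not shown in this log):

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods   # prints the JSON array above
    kill "$socat_pid"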
00:06:09.623   10:43:58	-- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:06:09.623   10:43:58	-- common/autotest_common.sh@728 -- # xtrace_disable
00:06:09.623   10:43:58	-- common/autotest_common.sh@10 -- # set +x
00:06:09.623   10:43:58	-- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:06:09.623   10:43:58	-- spdkcli/tcp.sh@38 -- # killprocess 2080402
00:06:09.623   10:43:58	-- common/autotest_common.sh@936 -- # '[' -z 2080402 ']'
00:06:09.623   10:43:58	-- common/autotest_common.sh@940 -- # kill -0 2080402
00:06:09.623    10:43:58	-- common/autotest_common.sh@941 -- # uname
00:06:09.623   10:43:58	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:09.623    10:43:58	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2080402
00:06:09.883   10:43:58	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:09.883   10:43:58	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:09.883   10:43:58	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2080402'
00:06:09.883  killing process with pid 2080402
00:06:09.883   10:43:58	-- common/autotest_common.sh@955 -- # kill 2080402
00:06:09.883   10:43:58	-- common/autotest_common.sh@960 -- # wait 2080402
00:06:10.453  
00:06:10.453  real	0m2.212s
00:06:10.453  user	0m4.082s
00:06:10.453  sys	0m0.658s
00:06:10.453   10:43:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:10.453   10:43:59	-- common/autotest_common.sh@10 -- # set +x
00:06:10.453  ************************************
00:06:10.453  END TEST spdkcli_tcp
00:06:10.453  ************************************
00:06:10.453   10:43:59	-- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:10.453   10:43:59	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:10.453   10:43:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:10.453   10:43:59	-- common/autotest_common.sh@10 -- # set +x
00:06:10.453  ************************************
00:06:10.453  START TEST dpdk_mem_utility
00:06:10.453  ************************************
00:06:10.453   10:43:59	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:06:10.453  * Looking for test storage...
00:06:10.453  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/dpdk_memory_utility
00:06:10.453    10:43:59	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:10.453     10:43:59	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:10.453     10:43:59	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:10.453    10:43:59	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:10.453    10:43:59	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:10.453    10:43:59	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:10.453    10:43:59	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:10.453    10:43:59	-- scripts/common.sh@335 -- # IFS=.-:
00:06:10.453    10:43:59	-- scripts/common.sh@335 -- # read -ra ver1
00:06:10.453    10:43:59	-- scripts/common.sh@336 -- # IFS=.-:
00:06:10.453    10:43:59	-- scripts/common.sh@336 -- # read -ra ver2
00:06:10.453    10:43:59	-- scripts/common.sh@337 -- # local 'op=<'
00:06:10.453    10:43:59	-- scripts/common.sh@339 -- # ver1_l=2
00:06:10.453    10:43:59	-- scripts/common.sh@340 -- # ver2_l=1
00:06:10.453    10:43:59	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:10.453    10:43:59	-- scripts/common.sh@343 -- # case "$op" in
00:06:10.453    10:43:59	-- scripts/common.sh@344 -- # : 1
00:06:10.453    10:43:59	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:10.453    10:43:59	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:10.453     10:43:59	-- scripts/common.sh@364 -- # decimal 1
00:06:10.453     10:43:59	-- scripts/common.sh@352 -- # local d=1
00:06:10.453     10:43:59	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:10.453     10:43:59	-- scripts/common.sh@354 -- # echo 1
00:06:10.453    10:43:59	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:10.453     10:43:59	-- scripts/common.sh@365 -- # decimal 2
00:06:10.453     10:43:59	-- scripts/common.sh@352 -- # local d=2
00:06:10.453     10:43:59	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:10.453     10:43:59	-- scripts/common.sh@354 -- # echo 2
00:06:10.453    10:43:59	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:10.453    10:43:59	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:10.453    10:43:59	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:10.453    10:43:59	-- scripts/common.sh@367 -- # return 0
00:06:10.453    10:43:59	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:10.453    10:43:59	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:10.453  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.453  		--rc genhtml_branch_coverage=1
00:06:10.453  		--rc genhtml_function_coverage=1
00:06:10.453  		--rc genhtml_legend=1
00:06:10.453  		--rc geninfo_all_blocks=1
00:06:10.453  		--rc geninfo_unexecuted_blocks=1
00:06:10.453  		
00:06:10.453  		'
00:06:10.454    10:43:59	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:10.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.454  		--rc genhtml_branch_coverage=1
00:06:10.454  		--rc genhtml_function_coverage=1
00:06:10.454  		--rc genhtml_legend=1
00:06:10.454  		--rc geninfo_all_blocks=1
00:06:10.454  		--rc geninfo_unexecuted_blocks=1
00:06:10.454  		
00:06:10.454  		'
00:06:10.454    10:43:59	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:10.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.454  		--rc genhtml_branch_coverage=1
00:06:10.454  		--rc genhtml_function_coverage=1
00:06:10.454  		--rc genhtml_legend=1
00:06:10.454  		--rc geninfo_all_blocks=1
00:06:10.454  		--rc geninfo_unexecuted_blocks=1
00:06:10.454  		
00:06:10.454  		'
00:06:10.454    10:43:59	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:10.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:10.454  		--rc genhtml_branch_coverage=1
00:06:10.454  		--rc genhtml_function_coverage=1
00:06:10.454  		--rc genhtml_legend=1
00:06:10.454  		--rc geninfo_all_blocks=1
00:06:10.454  		--rc geninfo_unexecuted_blocks=1
00:06:10.454  		
00:06:10.454  		'
00:06:10.454   10:43:59	-- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:10.454   10:43:59	-- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:06:10.454   10:43:59	-- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2080774
00:06:10.454   10:43:59	-- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2080774
00:06:10.454   10:43:59	-- common/autotest_common.sh@829 -- # '[' -z 2080774 ']'
00:06:10.454   10:43:59	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:10.454   10:43:59	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:10.454   10:43:59	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:10.454  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:10.454   10:43:59	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:10.454   10:43:59	-- common/autotest_common.sh@10 -- # set +x
00:06:10.713  [2024-12-15 10:43:59.489303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:10.713  [2024-12-15 10:43:59.489359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080774 ]
00:06:10.713  EAL: No free 2048 kB hugepages reported on node 1
00:06:10.713  [2024-12-15 10:43:59.582078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:10.713  [2024-12-15 10:43:59.684926] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:10.713  [2024-12-15 10:43:59.685083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:10.973  [2024-12-15 10:43:59.880884] 'OCF_Core' volume operations registered
00:06:10.973  [2024-12-15 10:43:59.884104] 'OCF_Cache' volume operations registered
00:06:10.973  [2024-12-15 10:43:59.887745] 'OCF Composite' volume operations registered
00:06:10.973  [2024-12-15 10:43:59.890958] 'SPDK_block_device' volume operations registered
00:06:11.542   10:44:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:11.542   10:44:00	-- common/autotest_common.sh@862 -- # return 0
00:06:11.542   10:44:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:11.542   10:44:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:11.542   10:44:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:11.542   10:44:00	-- common/autotest_common.sh@10 -- # set +x
00:06:11.542  {
00:06:11.542  "filename": "/tmp/spdk_mem_dump.txt"
00:06:11.542  }
00:06:11.542   10:44:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:11.542   10:44:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:06:11.542  DPDK memory size 1198.000000 MiB in 1 heap(s)
00:06:11.542  1 heaps totaling size 1198.000000 MiB
00:06:11.542    size: 1198.000000 MiB heap id: 0
00:06:11.542  end heaps----------
00:06:11.542  26 mempools totaling size 954.459290 MiB
00:06:11.542    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:06:11.542    size:  158.602051 MiB name: PDU_data_out_Pool
00:06:11.542    size:   84.521057 MiB name: bdev_io_2080774
00:06:11.542    size:   76.286926 MiB name: ocf_env_12:ocf_mio_8
00:06:11.542    size:   60.174072 MiB name: ocf_env_8:ocf_req_128
00:06:11.542    size:   51.011292 MiB name: evtpool_2080774
00:06:11.542    size:   50.003479 MiB name: msgpool_2080774
00:06:11.542    size:   40.142639 MiB name: ocf_env_11:ocf_mio_4
00:06:11.542    size:   34.164612 MiB name: ocf_env_7:ocf_req_64
00:06:11.542    size:   22.138245 MiB name: ocf_env_6:ocf_req_32
00:06:11.542    size:   22.138245 MiB name: ocf_env_10:ocf_mio_2
00:06:11.542    size:   21.763794 MiB name: PDU_Pool
00:06:11.542    size:   19.513306 MiB name: SCSI_TASK_Pool
00:06:11.542    size:   16.136780 MiB name: ocf_env_5:ocf_req_16
00:06:11.542    size:   14.136292 MiB name: ocf_env_4:ocf_req_8
00:06:11.542    size:   14.136292 MiB name: ocf_env_9:ocf_mio_1
00:06:11.542    size:   12.136414 MiB name: ocf_env_3:ocf_req_4
00:06:11.542    size:   10.135315 MiB name: ocf_env_1:ocf_req_1
00:06:11.542    size:   10.135315 MiB name: ocf_env_2:ocf_req_2
00:06:11.542    size:    8.133545 MiB name: ocf_env_17:OCF Composit
00:06:11.542    size:    6.133728 MiB name: ocf_env_16:OCF_Cache
00:06:11.542    size:    6.133728 MiB name: ocf_env_18:SPDK_block_d
00:06:11.542    size:    1.609375 MiB name: ocf_env_15:ocf_mio_64
00:06:11.542    size:    1.310547 MiB name: ocf_env_14:ocf_mio_32
00:06:11.542    size:    1.161133 MiB name: ocf_env_13:ocf_mio_16
00:06:11.542    size:    0.026123 MiB name: Session_Pool
00:06:11.542  end mempools-------
00:06:11.542  6 memzones totaling size 4.142822 MiB
00:06:11.542    size:    1.000366 MiB name: RG_ring_0_2080774
00:06:11.542    size:    1.000366 MiB name: RG_ring_1_2080774
00:06:11.542    size:    1.000366 MiB name: RG_ring_4_2080774
00:06:11.542    size:    1.000366 MiB name: RG_ring_5_2080774
00:06:11.542    size:    0.125366 MiB name: RG_ring_2_2080774
00:06:11.542    size:    0.015991 MiB name: RG_ring_3_2080774
00:06:11.542  end memzones-------
00:06:11.542   10:44:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:06:11.803  heap id: 0 total size: 1198.000000 MiB number of busy elements: 120 number of free elements: 47
00:06:11.803    list of free elements. size: 40.154602 MiB
00:06:11.803      element at address: 0x200030800000 with size:    0.999878 MiB
00:06:11.803      element at address: 0x200030200000 with size:    0.999329 MiB
00:06:11.803      element at address: 0x200030c00000 with size:    0.999329 MiB
00:06:11.803      element at address: 0x20002f800000 with size:    0.998962 MiB
00:06:11.803      element at address: 0x20002f000000 with size:    0.998779 MiB
00:06:11.803      element at address: 0x200018e00000 with size:    0.998718 MiB
00:06:11.803      element at address: 0x200019000000 with size:    0.997375 MiB
00:06:11.803      element at address: 0x200019a00000 with size:    0.997375 MiB
00:06:11.803      element at address: 0x20001b000000 with size:    0.996399 MiB
00:06:11.803      element at address: 0x200024a00000 with size:    0.996399 MiB
00:06:11.803      element at address: 0x200003e00000 with size:    0.996277 MiB
00:06:11.803      element at address: 0x20001a400000 with size:    0.996277 MiB
00:06:11.803      element at address: 0x20001be00000 with size:    0.995911 MiB
00:06:11.803      element at address: 0x20001d000000 with size:    0.994446 MiB
00:06:11.803      element at address: 0x200025a00000 with size:    0.994446 MiB
00:06:11.803      element at address: 0x200049c00000 with size:    0.994446 MiB
00:06:11.803      element at address: 0x200027200000 with size:    0.990051 MiB
00:06:11.803      element at address: 0x20001e800000 with size:    0.968079 MiB
00:06:11.803      element at address: 0x20003fa00000 with size:    0.959961 MiB
00:06:11.803      element at address: 0x200020c00000 with size:    0.958374 MiB
00:06:11.803      element at address: 0x200030a00000 with size:    0.936584 MiB
00:06:11.803      element at address: 0x20001ce00000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x20001e600000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x200020a00000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x200024800000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x200025800000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x200027000000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x200029a00000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x20002ee00000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x20002f600000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x200030000000 with size:    0.866211 MiB
00:06:11.803      element at address: 0x200007000000 with size:    0.866089 MiB
00:06:11.803      element at address: 0x20000b200000 with size:    0.866089 MiB
00:06:11.803      element at address: 0x200000400000 with size:    0.865723 MiB
00:06:11.803      element at address: 0x200000800000 with size:    0.863159 MiB
00:06:11.803      element at address: 0x200029c00000 with size:    0.845764 MiB
00:06:11.803      element at address: 0x200013800000 with size:    0.845581 MiB
00:06:11.803      element at address: 0x200000200000 with size:    0.841614 MiB
00:06:11.803      element at address: 0x20002e800000 with size:    0.837769 MiB
00:06:11.803      element at address: 0x20002ea00000 with size:    0.688354 MiB
00:06:11.803      element at address: 0x200032600000 with size:    0.582886 MiB
00:06:11.803      element at address: 0x200030e00000 with size:    0.490845 MiB
00:06:11.803      element at address: 0x200049a00000 with size:    0.490845 MiB
00:06:11.803      element at address: 0x200031000000 with size:    0.485657 MiB
00:06:11.803      element at address: 0x20003fc00000 with size:    0.410034 MiB
00:06:11.803      element at address: 0x20002ec00000 with size:    0.389160 MiB
00:06:11.803      element at address: 0x200003a00000 with size:    0.355530 MiB
00:06:11.803    list of standard malloc elements. size: 199.233032 MiB
00:06:11.803      element at address: 0x20000b3fff80 with size:  132.000122 MiB
00:06:11.803      element at address: 0x2000071fff80 with size:   64.000122 MiB
00:06:11.803      element at address: 0x200018efff80 with size:    1.000122 MiB
00:06:11.803      element at address: 0x2000308fff80 with size:    1.000122 MiB
00:06:11.803      element at address: 0x200030afff80 with size:    1.000122 MiB
00:06:11.803      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:06:11.803      element at address: 0x200030aeff00 with size:    0.062622 MiB
00:06:11.803      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:06:11.803      element at address: 0x200018effd40 with size:    0.000549 MiB
00:06:11.803      element at address: 0x200030aefdc0 with size:    0.000305 MiB
00:06:11.803      element at address: 0x200018effc40 with size:    0.000244 MiB
00:06:11.803      element at address: 0x200020cf5700 with size:    0.000244 MiB
00:06:11.803      element at address: 0x2000002d7740 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000002d7800 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000002d78c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000002d7ac0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000002d7b80 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000004fdc00 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000008fd180 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200003a5b040 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200003adb300 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200003adb500 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200003adf7c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200003affa80 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200003affb40 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200003eff0c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000070fdd80 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20000b2fdd80 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000138f8980 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200018effac0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200018effb80 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000190ff540 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000190ff600 with size:    0.000183 MiB
00:06:11.803      element at address: 0x2000190ff6c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200019aff540 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200019aff600 with size:    0.000183 MiB
00:06:11.803      element at address: 0x200019aff6c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001a4ff0c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001a4ff180 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001a4ff240 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001b0ff140 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001b0ff200 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001b0ff2c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001befef40 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001beff000 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001beff0c0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001cefde00 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001d0fe940 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001d0fea00 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001d0feac0 with size:    0.000183 MiB
00:06:11.803      element at address: 0x20001e6fde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20001e8f7d40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20001e8f7e00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20001e8f7ec0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200020afde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200020cf5580 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200020cf5640 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200020cf5800 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000248fde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200024aff140 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200024aff200 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200024aff2c0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000258fde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200025afe940 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200025afea00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200025afeac0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000270fde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000272fd740 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000272fd800 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000272fd8c0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200029afde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200029cd8840 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200029cd8900 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200029cd89c0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002e8d6780 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002e8d6840 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002e8d6900 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002e8fde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002eab0380 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002eab0440 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002eab0500 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002eafde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002ec63a00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002ec63ac0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002ec63b80 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002ec63c40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002ec63d00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002ecfde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002eefde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f0ffb00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f0ffbc0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f0ffc80 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f0ffd40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f6fde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f8ffbc0 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f8ffc80 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f8ffd40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20002f8ffe00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000300fde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000302ffd40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200030aefc40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200030aefd00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200030cffd40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200030e7da80 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200030e7db40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200030efde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x2000310bc740 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200032695380 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200032695440 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20003fafde00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20003fc68f80 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20003fc69040 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20003fc6fc40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20003fc6fe40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x20003fc6ff00 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200049a7da80 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200049a7db40 with size:    0.000183 MiB
00:06:11.804      element at address: 0x200049afde00 with size:    0.000183 MiB
00:06:11.804    list of memzone associated elements. size: 958.612366 MiB
00:06:11.804      element at address: 0x200032695500 with size:  211.416748 MiB
00:06:11.804        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:11.804      element at address: 0x20003fc6ffc0 with size:  157.562561 MiB
00:06:11.804        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:11.804      element at address: 0x2000139fab80 with size:   84.020630 MiB
00:06:11.804        associated memzone info: size:   84.020508 MiB name: MP_bdev_io_2080774_0
00:06:11.804      element at address: 0x200029cd8a80 with size:   75.153687 MiB
00:06:11.804        associated memzone info: size:   75.153564 MiB name: MP_ocf_env_12:ocf_mio_8_0
00:06:11.804      element at address: 0x200020cf58c0 with size:   59.040833 MiB
00:06:11.804        associated memzone info: size:   59.040710 MiB name: MP_ocf_env_8:ocf_req_128_0
00:06:11.804      element at address: 0x2000009ff380 with size:   48.003052 MiB
00:06:11.804        associated memzone info: size:   48.002930 MiB name: MP_evtpool_2080774_0
00:06:11.804      element at address: 0x200003fff380 with size:   48.003052 MiB
00:06:11.804        associated memzone info: size:   48.002930 MiB name: MP_msgpool_2080774_0
00:06:11.804      element at address: 0x2000272fd980 with size:   39.009399 MiB
00:06:11.804        associated memzone info: size:   39.009277 MiB name: MP_ocf_env_11:ocf_mio_4_0
00:06:11.804      element at address: 0x20001e8f7f80 with size:   33.031372 MiB
00:06:11.804        associated memzone info: size:   33.031250 MiB name: MP_ocf_env_7:ocf_req_64_0
00:06:11.804      element at address: 0x20001d0feb80 with size:   21.005005 MiB
00:06:11.804        associated memzone info: size:   21.004883 MiB name: MP_ocf_env_6:ocf_req_32_0
00:06:11.804      element at address: 0x200025afeb80 with size:   21.005005 MiB
00:06:11.804        associated memzone info: size:   21.004883 MiB name: MP_ocf_env_10:ocf_mio_2_0
00:06:11.804      element at address: 0x2000311be940 with size:   20.255554 MiB
00:06:11.804        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:06:11.804      element at address: 0x200049dfeb40 with size:   18.005066 MiB
00:06:11.804        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:11.804      element at address: 0x20001beff180 with size:   15.003540 MiB
00:06:11.804        associated memzone info: size:   15.003418 MiB name: MP_ocf_env_5:ocf_req_16_0
00:06:11.804      element at address: 0x20001b0ff380 with size:   13.003052 MiB
00:06:11.804        associated memzone info: size:   13.002930 MiB name: MP_ocf_env_4:ocf_req_8_0
00:06:11.804      element at address: 0x200024aff380 with size:   13.003052 MiB
00:06:11.804        associated memzone info: size:   13.002930 MiB name: MP_ocf_env_9:ocf_mio_1_0
00:06:11.804      element at address: 0x20001a4ff300 with size:   11.003174 MiB
00:06:11.804        associated memzone info: size:   11.003052 MiB name: MP_ocf_env_3:ocf_req_4_0
00:06:11.804      element at address: 0x2000190ff780 with size:    9.002075 MiB
00:06:11.804        associated memzone info: size:    9.001953 MiB name: MP_ocf_env_1:ocf_req_1_0
00:06:11.804      element at address: 0x200019aff780 with size:    9.002075 MiB
00:06:11.804        associated memzone info: size:    9.001953 MiB name: MP_ocf_env_2:ocf_req_2_0
00:06:11.804      element at address: 0x20002f8ffec0 with size:    7.000305 MiB
00:06:11.804        associated memzone info: size:    7.000183 MiB name: MP_ocf_env_17:OCF Composit_0
00:06:11.804      element at address: 0x20002f0ffe00 with size:    5.000488 MiB
00:06:11.804        associated memzone info: size:    5.000366 MiB name: MP_ocf_env_16:OCF_Cache_0
00:06:11.804      element at address: 0x2000302ffe00 with size:    5.000488 MiB
00:06:11.804        associated memzone info: size:    5.000366 MiB name: MP_ocf_env_18:SPDK_block_d_0
00:06:11.804      element at address: 0x2000005ffe00 with size:    2.000488 MiB
00:06:11.804        associated memzone info: size:    2.000366 MiB name: RG_MP_evtpool_2080774
00:06:11.804      element at address: 0x200003bffe00 with size:    2.000488 MiB
00:06:11.804        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_2080774
00:06:11.804      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_evtpool_2080774
00:06:11.804      element at address: 0x2000138f8a40 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_1:ocf_req_1
00:06:11.804      element at address: 0x20000b2fde40 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_2:ocf_req_2
00:06:11.804      element at address: 0x2000070fde40 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_3:ocf_req_4
00:06:11.804      element at address: 0x2000008fd240 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_4:ocf_req_8
00:06:11.804      element at address: 0x2000004fdcc0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_5:ocf_req_16
00:06:11.804      element at address: 0x20001cefdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_6:ocf_req_32
00:06:11.804      element at address: 0x20001e6fdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_7:ocf_req_64
00:06:11.804      element at address: 0x200020afdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_8:ocf_req_128
00:06:11.804      element at address: 0x2000248fdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_9:ocf_mio_1
00:06:11.804      element at address: 0x2000258fdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_10:ocf_mio_2
00:06:11.804      element at address: 0x2000270fdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_11:ocf_mio_4
00:06:11.804      element at address: 0x200029afdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_12:ocf_mio_8
00:06:11.804      element at address: 0x20002e8fdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_13:ocf_mio_16
00:06:11.804      element at address: 0x20002eafdec0 with size:    1.008118 MiB
00:06:11.804        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_14:ocf_mio_32
00:06:11.804      element at address: 0x20002ecfdec0 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_15:ocf_mio_64
00:06:11.805      element at address: 0x20002eefdec0 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_16:OCF_Cache
00:06:11.805      element at address: 0x20002f6fdec0 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_17:OCF Composit
00:06:11.805      element at address: 0x2000300fdec0 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_ocf_env_18:SPDK_block_d
00:06:11.805      element at address: 0x200030efdec0 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:06:11.805      element at address: 0x2000310bc800 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:11.805      element at address: 0x20003fafdec0 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:06:11.805      element at address: 0x200049afdec0 with size:    1.008118 MiB
00:06:11.805        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:11.805      element at address: 0x200003eff180 with size:    1.000488 MiB
00:06:11.805        associated memzone info: size:    1.000366 MiB name: RG_ring_0_2080774
00:06:11.805      element at address: 0x200003affc00 with size:    1.000488 MiB
00:06:11.805        associated memzone info: size:    1.000366 MiB name: RG_ring_1_2080774
00:06:11.805      element at address: 0x200030cffe00 with size:    1.000488 MiB
00:06:11.805        associated memzone info: size:    1.000366 MiB name: RG_ring_4_2080774
00:06:11.805      element at address: 0x200049cfe940 with size:    1.000488 MiB
00:06:11.805        associated memzone info: size:    1.000366 MiB name: RG_ring_5_2080774
00:06:11.805      element at address: 0x20002ec63dc0 with size:    0.600891 MiB
00:06:11.805        associated memzone info: size:    0.600769 MiB name: MP_ocf_env_15:ocf_mio_64_0
00:06:11.805      element at address: 0x200003a5b100 with size:    0.500488 MiB
00:06:11.805        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_2080774
00:06:11.805      element at address: 0x200030e7dc00 with size:    0.500488 MiB
00:06:11.805        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:06:11.805      element at address: 0x200049a7dc00 with size:    0.500488 MiB
00:06:11.805        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:11.805      element at address: 0x20002eab05c0 with size:    0.302063 MiB
00:06:11.805        associated memzone info: size:    0.301941 MiB name: MP_ocf_env_14:ocf_mio_32_0
00:06:11.805      element at address: 0x20003107c540 with size:    0.250488 MiB
00:06:11.805        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:11.805      element at address: 0x20002e8d69c0 with size:    0.152649 MiB
00:06:11.805        associated memzone info: size:    0.152527 MiB name: MP_ocf_env_13:ocf_mio_16_0
00:06:11.805      element at address: 0x200003adf880 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_ring_2_2080774
00:06:11.805      element at address: 0x2000138d8780 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_1:ocf_req_1
00:06:11.805      element at address: 0x20000b2ddb80 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_2:ocf_req_2
00:06:11.805      element at address: 0x2000070ddb80 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_3:ocf_req_4
00:06:11.805      element at address: 0x2000008dcf80 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_4:ocf_req_8
00:06:11.805      element at address: 0x2000004dda00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_5:ocf_req_16
00:06:11.805      element at address: 0x20001ceddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_6:ocf_req_32
00:06:11.805      element at address: 0x20001e6ddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_7:ocf_req_64
00:06:11.805      element at address: 0x200020addc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_8:ocf_req_128
00:06:11.805      element at address: 0x2000248ddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_9:ocf_mio_1
00:06:11.805      element at address: 0x2000258ddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_10:ocf_mio_2
00:06:11.805      element at address: 0x2000270ddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_11:ocf_mio_4
00:06:11.805      element at address: 0x200029addc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_12:ocf_mio_8
00:06:11.805      element at address: 0x20002eeddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_16:OCF_Cache
00:06:11.805      element at address: 0x20002f6ddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_17:OCF Composit
00:06:11.805      element at address: 0x2000300ddc00 with size:    0.125488 MiB
00:06:11.805        associated memzone info: size:    0.125366 MiB name: RG_MP_ocf_env_18:SPDK_block_d
00:06:11.805      element at address: 0x20003faf5c00 with size:    0.031738 MiB
00:06:11.805        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:11.805      element at address: 0x20003fc69100 with size:    0.023743 MiB
00:06:11.805        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:06:11.805      element at address: 0x200003adb5c0 with size:    0.016113 MiB
00:06:11.805        associated memzone info: size:    0.015991 MiB name: RG_ring_3_2080774
00:06:11.805      element at address: 0x20003fc6f240 with size:    0.002441 MiB
00:06:11.805        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:06:11.805      element at address: 0x20002e8fdb00 with size:    0.000732 MiB
00:06:11.805        associated memzone info: size:    0.000610 MiB name: RG_MP_ocf_env_13:ocf_mio_16
00:06:11.805      element at address: 0x20002eafdb00 with size:    0.000732 MiB
00:06:11.805        associated memzone info: size:    0.000610 MiB name: RG_MP_ocf_env_14:ocf_mio_32
00:06:11.805      element at address: 0x20002ecfdb00 with size:    0.000732 MiB
00:06:11.805        associated memzone info: size:    0.000610 MiB name: RG_MP_ocf_env_15:ocf_mio_64
00:06:11.805      element at address: 0x2000002d7980 with size:    0.000305 MiB
00:06:11.805        associated memzone info: size:    0.000183 MiB name: MP_msgpool_2080774
00:06:11.805      element at address: 0x200003adb3c0 with size:    0.000305 MiB
00:06:11.805        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_2080774
00:06:11.805      element at address: 0x20003fc6fd00 with size:    0.000305 MiB
00:06:11.805        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
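Both listings above come from dpdk_mem_info.py parsing that dump file. Run with no
arguments it prints the heap/mempool/memzone summary; with -m <heap-id> it prints
the per-element map and memzone associations for one heap, as in the "heap id: 0"
listing. A sketch, assuming the dump sits at its default /tmp location:

  # Summarize all heaps, mempools and memzones from the dump:
  ./scripts/dpdk_mem_info.py

  # Detailed element map for a single heap (heap id 0 here):
  ./scripts/dpdk_mem_info.py -m 0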
00:06:11.805   10:44:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:11.805   10:44:00	-- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2080774
00:06:11.805   10:44:00	-- common/autotest_common.sh@936 -- # '[' -z 2080774 ']'
00:06:11.805   10:44:00	-- common/autotest_common.sh@940 -- # kill -0 2080774
00:06:11.805    10:44:00	-- common/autotest_common.sh@941 -- # uname
00:06:11.805   10:44:00	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:11.805    10:44:00	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2080774
00:06:11.805   10:44:00	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:11.805   10:44:00	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:11.805   10:44:00	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2080774'
00:06:11.805  killing process with pid 2080774
00:06:11.805   10:44:00	-- common/autotest_common.sh@955 -- # kill 2080774
00:06:11.805   10:44:00	-- common/autotest_common.sh@960 -- # wait 2080774
00:06:12.375  
00:06:12.375  real	0m2.000s
00:06:12.375  user	0m2.056s
00:06:12.375  sys	0m0.641s
00:06:12.375   10:44:01	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:12.375   10:44:01	-- common/autotest_common.sh@10 -- # set +x
00:06:12.375  ************************************
00:06:12.375  END TEST dpdk_mem_utility
00:06:12.375  ************************************
00:06:12.375   10:44:01	-- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh
00:06:12.375   10:44:01	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:12.375   10:44:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.375   10:44:01	-- common/autotest_common.sh@10 -- # set +x
00:06:12.375  ************************************
00:06:12.375  START TEST event
00:06:12.375  ************************************
00:06:12.375   10:44:01	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event.sh
00:06:12.635  * Looking for test storage...
00:06:12.635  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event
00:06:12.635    10:44:01	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:12.635     10:44:01	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:12.635     10:44:01	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:12.635    10:44:01	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:12.635    10:44:01	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:12.635    10:44:01	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:12.635    10:44:01	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:12.635    10:44:01	-- scripts/common.sh@335 -- # IFS=.-:
00:06:12.635    10:44:01	-- scripts/common.sh@335 -- # read -ra ver1
00:06:12.635    10:44:01	-- scripts/common.sh@336 -- # IFS=.-:
00:06:12.635    10:44:01	-- scripts/common.sh@336 -- # read -ra ver2
00:06:12.635    10:44:01	-- scripts/common.sh@337 -- # local 'op=<'
00:06:12.635    10:44:01	-- scripts/common.sh@339 -- # ver1_l=2
00:06:12.635    10:44:01	-- scripts/common.sh@340 -- # ver2_l=1
00:06:12.635    10:44:01	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:12.635    10:44:01	-- scripts/common.sh@343 -- # case "$op" in
00:06:12.635    10:44:01	-- scripts/common.sh@344 -- # : 1
00:06:12.635    10:44:01	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:12.635    10:44:01	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:12.635     10:44:01	-- scripts/common.sh@364 -- # decimal 1
00:06:12.635     10:44:01	-- scripts/common.sh@352 -- # local d=1
00:06:12.635     10:44:01	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:12.635     10:44:01	-- scripts/common.sh@354 -- # echo 1
00:06:12.635    10:44:01	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:12.635     10:44:01	-- scripts/common.sh@365 -- # decimal 2
00:06:12.635     10:44:01	-- scripts/common.sh@352 -- # local d=2
00:06:12.635     10:44:01	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:12.635     10:44:01	-- scripts/common.sh@354 -- # echo 2
00:06:12.635    10:44:01	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:12.635    10:44:01	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:12.635    10:44:01	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:12.635    10:44:01	-- scripts/common.sh@367 -- # return 0
00:06:12.635    10:44:01	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:12.635    10:44:01	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:12.635  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.635  		--rc genhtml_branch_coverage=1
00:06:12.635  		--rc genhtml_function_coverage=1
00:06:12.635  		--rc genhtml_legend=1
00:06:12.635  		--rc geninfo_all_blocks=1
00:06:12.635  		--rc geninfo_unexecuted_blocks=1
00:06:12.635  		
00:06:12.635  		'
00:06:12.635    10:44:01	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:12.635  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.635  		--rc genhtml_branch_coverage=1
00:06:12.635  		--rc genhtml_function_coverage=1
00:06:12.635  		--rc genhtml_legend=1
00:06:12.635  		--rc geninfo_all_blocks=1
00:06:12.635  		--rc geninfo_unexecuted_blocks=1
00:06:12.635  		
00:06:12.635  		'
00:06:12.635    10:44:01	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:12.635  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.635  		--rc genhtml_branch_coverage=1
00:06:12.635  		--rc genhtml_function_coverage=1
00:06:12.635  		--rc genhtml_legend=1
00:06:12.635  		--rc geninfo_all_blocks=1
00:06:12.635  		--rc geninfo_unexecuted_blocks=1
00:06:12.635  		
00:06:12.635  		'
00:06:12.635    10:44:01	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:12.635  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:12.635  		--rc genhtml_branch_coverage=1
00:06:12.635  		--rc genhtml_function_coverage=1
00:06:12.635  		--rc genhtml_legend=1
00:06:12.635  		--rc geninfo_all_blocks=1
00:06:12.635  		--rc geninfo_unexecuted_blocks=1
00:06:12.635  		
00:06:12.635  		'
00:06:12.635   10:44:01	-- event/event.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:12.635    10:44:01	-- bdev/nbd_common.sh@6 -- # set -e
00:06:12.635   10:44:01	-- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:12.635   10:44:01	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:06:12.635   10:44:01	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:12.635   10:44:01	-- common/autotest_common.sh@10 -- # set +x
00:06:12.635  ************************************
00:06:12.635  START TEST event_perf
00:06:12.635  ************************************
00:06:12.635   10:44:01	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:12.635  Running I/O for 1 second...[2024-12-15 10:44:01.536261] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:12.635  [2024-12-15 10:44:01.536353] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081083 ]
00:06:12.635  EAL: No free 2048 kB hugepages reported on node 1
00:06:12.635  [2024-12-15 10:44:01.640405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:12.894  [2024-12-15 10:44:01.739659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.894  [2024-12-15 10:44:01.739712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:12.894  [2024-12-15 10:44:01.739728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:12.894  [2024-12-15 10:44:01.739732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.831  Running I/O for 1 second...
00:06:13.831  lcore  0:   104153
00:06:13.831  lcore  1:   104155
00:06:13.831  lcore  2:   104153
00:06:13.831  lcore  3:   104151
00:06:14.091  done.
00:06:14.091  
00:06:14.091  real	0m1.339s
00:06:14.091  user	0m4.211s
00:06:14.091  sys	0m0.116s
00:06:14.091   10:44:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:14.091   10:44:02	-- common/autotest_common.sh@10 -- # set +x
00:06:14.091  ************************************
00:06:14.091  END TEST event_perf
00:06:14.091  ************************************
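The four per-lcore counters printed by event_perf together give the aggregate event
rate for the 1-second run; a quick check of the numbers above:

  # Sum the per-lcore counts from the run above:
  echo $((104153 + 104155 + 104153 + 104151))   # -> 416612 events over ~1 s on 4 cores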
00:06:14.091   10:44:02	-- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:14.091   10:44:02	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:14.091   10:44:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:14.091   10:44:02	-- common/autotest_common.sh@10 -- # set +x
00:06:14.091  ************************************
00:06:14.091  START TEST event_reactor
00:06:14.091  ************************************
00:06:14.091   10:44:02	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:14.091  [2024-12-15 10:44:02.911764] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:14.091  [2024-12-15 10:44:02.911817] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081280 ]
00:06:14.091  EAL: No free 2048 kB hugepages reported on node 1
00:06:14.091  [2024-12-15 10:44:03.001341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.091  [2024-12-15 10:44:03.098372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.470  test_start
00:06:15.470  oneshot
00:06:15.470  tick 100
00:06:15.470  tick 100
00:06:15.470  tick 250
00:06:15.470  tick 100
00:06:15.470  tick 100
00:06:15.470  tick 100
00:06:15.470  tick 250
00:06:15.470  tick 500
00:06:15.470  tick 100
00:06:15.470  tick 100
00:06:15.470  tick 250
00:06:15.470  tick 100
00:06:15.470  tick 100
00:06:15.470  test_end
00:06:15.470  
00:06:15.470  real	0m1.310s
00:06:15.470  user	0m1.205s
00:06:15.470  sys	0m0.099s
00:06:15.470   10:44:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:15.470   10:44:04	-- common/autotest_common.sh@10 -- # set +x
00:06:15.470  ************************************
00:06:15.470  END TEST event_reactor
00:06:15.470  ************************************
00:06:15.470   10:44:04	-- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:15.470   10:44:04	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:06:15.470   10:44:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:15.470   10:44:04	-- common/autotest_common.sh@10 -- # set +x
00:06:15.470  ************************************
00:06:15.470  START TEST event_reactor_perf
00:06:15.470  ************************************
00:06:15.470   10:44:04	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:15.470  [2024-12-15 10:44:04.282853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:15.470  [2024-12-15 10:44:04.282921] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081479 ]
00:06:15.470  EAL: No free 2048 kB hugepages reported on node 1
00:06:15.470  [2024-12-15 10:44:04.385469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.470  [2024-12-15 10:44:04.482509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.848  test_start
00:06:16.848  test_end
00:06:16.848  Performance:   323018 events per second
00:06:16.848  
00:06:16.848  real	0m1.335s
00:06:16.848  user	0m1.218s
00:06:16.848  sys	0m0.111s
00:06:16.848   10:44:05	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:16.848   10:44:05	-- common/autotest_common.sh@10 -- # set +x
00:06:16.848  ************************************
00:06:16.848  END TEST event_reactor_perf
00:06:16.848  ************************************
00:06:16.848    10:44:05	-- event/event.sh@49 -- # uname -s
00:06:16.848   10:44:05	-- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:16.848   10:44:05	-- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:16.848   10:44:05	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:16.848   10:44:05	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:16.848   10:44:05	-- common/autotest_common.sh@10 -- # set +x
00:06:16.848  ************************************
00:06:16.848  START TEST event_scheduler
00:06:16.848  ************************************
00:06:16.848   10:44:05	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:16.848  * Looking for test storage...
00:06:16.848  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler
00:06:16.848    10:44:05	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:16.848     10:44:05	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:16.848     10:44:05	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:16.848    10:44:05	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:16.848    10:44:05	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:16.848    10:44:05	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:16.848    10:44:05	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:16.848    10:44:05	-- scripts/common.sh@335 -- # IFS=.-:
00:06:16.848    10:44:05	-- scripts/common.sh@335 -- # read -ra ver1
00:06:16.848    10:44:05	-- scripts/common.sh@336 -- # IFS=.-:
00:06:16.848    10:44:05	-- scripts/common.sh@336 -- # read -ra ver2
00:06:16.848    10:44:05	-- scripts/common.sh@337 -- # local 'op=<'
00:06:16.848    10:44:05	-- scripts/common.sh@339 -- # ver1_l=2
00:06:16.848    10:44:05	-- scripts/common.sh@340 -- # ver2_l=1
00:06:16.848    10:44:05	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:16.848    10:44:05	-- scripts/common.sh@343 -- # case "$op" in
00:06:16.848    10:44:05	-- scripts/common.sh@344 -- # : 1
00:06:16.848    10:44:05	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:16.848    10:44:05	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:16.848     10:44:05	-- scripts/common.sh@364 -- # decimal 1
00:06:16.848     10:44:05	-- scripts/common.sh@352 -- # local d=1
00:06:16.848     10:44:05	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:16.848     10:44:05	-- scripts/common.sh@354 -- # echo 1
00:06:16.848    10:44:05	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:16.848     10:44:05	-- scripts/common.sh@365 -- # decimal 2
00:06:16.848     10:44:05	-- scripts/common.sh@352 -- # local d=2
00:06:16.848     10:44:05	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:16.848     10:44:05	-- scripts/common.sh@354 -- # echo 2
00:06:16.848    10:44:05	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:16.848    10:44:05	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:16.848    10:44:05	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:16.848    10:44:05	-- scripts/common.sh@367 -- # return 0
00:06:16.848    10:44:05	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:16.848    10:44:05	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:16.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.848  		--rc genhtml_branch_coverage=1
00:06:16.848  		--rc genhtml_function_coverage=1
00:06:16.848  		--rc genhtml_legend=1
00:06:16.848  		--rc geninfo_all_blocks=1
00:06:16.848  		--rc geninfo_unexecuted_blocks=1
00:06:16.848  		
00:06:16.848  		'
00:06:16.848    10:44:05	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:16.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.848  		--rc genhtml_branch_coverage=1
00:06:16.848  		--rc genhtml_function_coverage=1
00:06:16.848  		--rc genhtml_legend=1
00:06:16.848  		--rc geninfo_all_blocks=1
00:06:16.848  		--rc geninfo_unexecuted_blocks=1
00:06:16.848  		
00:06:16.848  		'
00:06:16.848    10:44:05	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:16.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.848  		--rc genhtml_branch_coverage=1
00:06:16.848  		--rc genhtml_function_coverage=1
00:06:16.848  		--rc genhtml_legend=1
00:06:16.848  		--rc geninfo_all_blocks=1
00:06:16.848  		--rc geninfo_unexecuted_blocks=1
00:06:16.848  		
00:06:16.848  		'
00:06:16.848    10:44:05	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:16.848  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:16.848  		--rc genhtml_branch_coverage=1
00:06:16.848  		--rc genhtml_function_coverage=1
00:06:16.848  		--rc genhtml_legend=1
00:06:16.848  		--rc geninfo_all_blocks=1
00:06:16.848  		--rc geninfo_unexecuted_blocks=1
00:06:16.848  		
00:06:16.848  		'
00:06:16.848   10:44:05	-- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:16.848   10:44:05	-- scheduler/scheduler.sh@35 -- # scheduler_pid=2081712
00:06:16.848   10:44:05	-- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:16.848   10:44:05	-- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:16.848   10:44:05	-- scheduler/scheduler.sh@37 -- # waitforlisten 2081712
00:06:16.848   10:44:05	-- common/autotest_common.sh@829 -- # '[' -z 2081712 ']'
00:06:16.848   10:44:05	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.848   10:44:05	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:16.848   10:44:05	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.848  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.848   10:44:05	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:16.848   10:44:05	-- common/autotest_common.sh@10 -- # set +x
00:06:17.108  [2024-12-15 10:44:05.905835] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:17.108  [2024-12-15 10:44:05.905917] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081712 ]
00:06:17.108  EAL: No free 2048 kB hugepages reported on node 1
00:06:17.108  [2024-12-15 10:44:06.058082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:17.367  [2024-12-15 10:44:06.221947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.367  [2024-12-15 10:44:06.222048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:17.367  [2024-12-15 10:44:06.222131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:17.367  [2024-12-15 10:44:06.222143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:17.936   10:44:06	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:17.936   10:44:06	-- common/autotest_common.sh@862 -- # return 0
00:06:17.936   10:44:06	-- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:17.936   10:44:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:17.936   10:44:06	-- common/autotest_common.sh@10 -- # set +x
00:06:17.936  POWER: Env isn't set yet!
00:06:17.936  POWER: Attempting to initialise ACPI cpufreq power management...
00:06:17.936  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:17.936  POWER: Cannot set governor of lcore 0 to userspace
00:06:17.936  POWER: Attempting to initialise PSTAT power management...
00:06:17.936  POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:06:17.936  POWER: Initialized successfully for lcore 0 power management
00:06:17.936  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:06:17.936  POWER: Initialized successfully for lcore 1 power management
00:06:17.936  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:06:17.936  POWER: Initialized successfully for lcore 2 power management
00:06:17.936  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:06:17.936  POWER: Initialized successfully for lcore 3 power management
00:06:17.936  [2024-12-15 10:44:06.865825] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:17.936  [2024-12-15 10:44:06.865844] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:17.936  [2024-12-15 10:44:06.865856] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:17.936   10:44:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
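The framework_set_scheduler call above switches the app to the dynamic scheduler,
and the set_opts notices report its tuning (load limit 20, core limit 80, core
busy 95). Issued by hand it would look roughly like this, assuming the default
RPC socket; framework_get_scheduler reads the active choice back:

  # Select the dynamic scheduler, then confirm the setting:
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_get_scheduler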
00:06:17.936   10:44:06	-- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:17.936   10:44:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:17.936   10:44:06	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  [2024-12-15 10:44:07.076861] 'OCF_Core' volume operations registered
00:06:18.195  [2024-12-15 10:44:07.080660] 'OCF_Cache' volume operations registered
00:06:18.195  [2024-12-15 10:44:07.085003] 'OCF Composite' volume operations registered
00:06:18.195  [2024-12-15 10:44:07.088878] 'SPDK_block_device' volume operations registered
00:06:18.195  [2024-12-15 10:44:07.090105] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:18.195   10:44:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:18.195   10:44:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  ************************************
00:06:18.195  START TEST scheduler_create_thread
00:06:18.195  ************************************
00:06:18.195   10:44:07	-- common/autotest_common.sh@1114 -- # scheduler_create_thread
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  2
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  3
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  4
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  5
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  6
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  7
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  8
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:18.195   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.195   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.195  9
00:06:18.195   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.195   10:44:07	-- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:18.196   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.196   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.196  10
00:06:18.196   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.196    10:44:07	-- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:18.196    10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.196    10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.196    10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.196   10:44:07	-- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:18.196   10:44:07	-- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:18.196   10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.196   10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:18.196   10:44:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:18.196    10:44:07	-- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:18.196    10:44:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:18.196    10:44:07	-- common/autotest_common.sh@10 -- # set +x
00:06:20.166    10:44:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:20.166   10:44:08	-- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:20.166   10:44:08	-- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:20.166   10:44:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:20.166   10:44:08	-- common/autotest_common.sh@10 -- # set +x
00:06:20.740   10:44:09	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:20.740  
00:06:20.740  real	0m2.621s
00:06:20.740  user	0m0.022s
00:06:20.740  sys	0m0.009s
00:06:20.740   10:44:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:20.740   10:44:09	-- common/autotest_common.sh@10 -- # set +x
00:06:20.740  ************************************
00:06:20.740  END TEST scheduler_create_thread
00:06:20.740  ************************************
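For readers following the trace above: scheduler_create_thread drives the scheduler plugin purely over rpc.py. A minimal sketch of the same flow, assuming the scheduler test app is already listening on the default RPC socket and that $SPDK_DIR points at an SPDK checkout (both assumptions; the RPC verbs and flags are taken from the trace):

# Sketch of the create/tune/delete cycle traced above (scheduler.sh@12-26).
rpc="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"

# Four busy threads pinned to cores 0-3 at 100% active, as in sh@12-15.
for mask in 0x1 0x2 0x4 0x8; do
    $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
done

# Create idle, then raise to 50% active via the returned id
# (the trace does this for 'half_active', thread_id=11 there).
tid=$($rpc scheduler_thread_create -n half_active -a 0)
$rpc scheduler_thread_set_active "$tid" 50

# Create-and-delete round trip ('deleted', thread_id=12 in the trace).
tid=$($rpc scheduler_thread_create -n deleted -a 100)
$rpc scheduler_thread_delete "$tid"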
00:06:20.999   10:44:09	-- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:20.999   10:44:09	-- scheduler/scheduler.sh@46 -- # killprocess 2081712
00:06:20.999   10:44:09	-- common/autotest_common.sh@936 -- # '[' -z 2081712 ']'
00:06:20.999   10:44:09	-- common/autotest_common.sh@940 -- # kill -0 2081712
00:06:20.999    10:44:09	-- common/autotest_common.sh@941 -- # uname
00:06:20.999   10:44:09	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:20.999    10:44:09	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2081712
00:06:20.999   10:44:09	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:06:20.999   10:44:09	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:06:20.999   10:44:09	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2081712'
00:06:20.999  killing process with pid 2081712
00:06:20.999   10:44:09	-- common/autotest_common.sh@955 -- # kill 2081712
00:06:20.999   10:44:09	-- common/autotest_common.sh@960 -- # wait 2081712
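The killprocess helper traced just above (autotest_common.sh@936-960) is a guarded kill: confirm the pid was given and is alive, refuse to kill a bare sudo wrapper, then kill and reap. A condensed sketch of that logic, reconstructed from the trace:

# Condensed from the killprocess trace above; upstream line numbers differ.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # '[ -z $pid ]' guard
    kill -0 "$pid" 2>/dev/null || return 1         # process must exist
    if [ "$(uname)" = Linux ]; then
        # Never kill a bare 'sudo'; the trace checks comm= exactly this way.
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"    # wait reaps it when pid is a child shell job
}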
00:06:21.259  [2024-12-15 10:44:10.201456] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:21.518  POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:06:21.518  POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:21.518  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:06:21.518  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:21.518  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:06:21.518  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:21.518  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:06:21.518  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
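The POWER lines above come from DPDK's power library putting each lcore's cpufreq governor back the way it found it. The same restore can be done by hand through sysfs; a hedged sketch (the sysfs path is standard Linux cpufreq, but treating 'powersave' as the pre-test governor is an assumption read off the messages above):

# Restore a saved governor for lcores 0-3; requires root.
orig_governor=powersave    # assumption: the original governor before the test
for cpu in 0 1 2 3; do
    echo "$orig_governor" > "/sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor"
done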
00:06:21.778  
00:06:21.778  real	0m4.946s
00:06:21.778  user	0m8.659s
00:06:21.778  sys	0m0.614s
00:06:21.778   10:44:10	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:21.778   10:44:10	-- common/autotest_common.sh@10 -- # set +x
00:06:21.778  ************************************
00:06:21.778  END TEST event_scheduler
00:06:21.778  ************************************
00:06:21.778   10:44:10	-- event/event.sh@51 -- # modprobe -n nbd
00:06:21.778   10:44:10	-- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:21.778   10:44:10	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:21.778   10:44:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:21.778   10:44:10	-- common/autotest_common.sh@10 -- # set +x
00:06:21.779  ************************************
00:06:21.779  START TEST app_repeat
00:06:21.779  ************************************
00:06:21.779   10:44:10	-- common/autotest_common.sh@1114 -- # app_repeat_test
00:06:21.779   10:44:10	-- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:21.779   10:44:10	-- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:21.779   10:44:10	-- event/event.sh@13 -- # local nbd_list
00:06:21.779   10:44:10	-- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:21.779   10:44:10	-- event/event.sh@14 -- # local bdev_list
00:06:21.779   10:44:10	-- event/event.sh@15 -- # local repeat_times=4
00:06:21.779   10:44:10	-- event/event.sh@17 -- # modprobe nbd
00:06:21.779   10:44:10	-- event/event.sh@19 -- # repeat_pid=2082481
00:06:21.779   10:44:10	-- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:21.779   10:44:10	-- event/event.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:21.779   10:44:10	-- event/event.sh@21 -- # echo 'Process app_repeat pid: 2082481'
00:06:21.779  Process app_repeat pid: 2082481
00:06:21.779   10:44:10	-- event/event.sh@23 -- # for i in {0..2}
00:06:21.779   10:44:10	-- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:21.779  spdk_app_start Round 0
00:06:21.779   10:44:10	-- event/event.sh@25 -- # waitforlisten 2082481 /var/tmp/spdk-nbd.sock
00:06:21.779   10:44:10	-- common/autotest_common.sh@829 -- # '[' -z 2082481 ']'
00:06:21.779   10:44:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:21.779   10:44:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:21.779   10:44:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:21.779  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:21.779   10:44:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:21.779   10:44:10	-- common/autotest_common.sh@10 -- # set +x
00:06:21.779  [2024-12-15 10:44:10.693751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:21.779  [2024-12-15 10:44:10.693845] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2082481 ]
00:06:21.779  EAL: No free 2048 kB hugepages reported on node 1
00:06:22.039  [2024-12-15 10:44:10.801820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:22.039  [2024-12-15 10:44:10.903392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:22.039  [2024-12-15 10:44:10.903397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:22.608   10:44:11	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:22.608   10:44:11	-- common/autotest_common.sh@862 -- # return 0
00:06:22.608   10:44:11	-- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:22.867  Malloc0
00:06:22.867   10:44:11	-- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:23.127  Malloc1
00:06:23.127   10:44:12	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@12 -- # local i
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.127   10:44:12	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:23.386  /dev/nbd0
00:06:23.386    10:44:12	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:23.386   10:44:12	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:23.386   10:44:12	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:06:23.386   10:44:12	-- common/autotest_common.sh@867 -- # local i
00:06:23.386   10:44:12	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:23.386   10:44:12	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:23.386   10:44:12	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:06:23.386   10:44:12	-- common/autotest_common.sh@871 -- # break
00:06:23.386   10:44:12	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:23.386   10:44:12	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:23.386   10:44:12	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:23.386  1+0 records in
00:06:23.386  1+0 records out
00:06:23.386  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230459 s, 17.8 MB/s
00:06:23.386    10:44:12	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:23.386   10:44:12	-- common/autotest_common.sh@884 -- # size=4096
00:06:23.386   10:44:12	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:23.386   10:44:12	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:23.386   10:44:12	-- common/autotest_common.sh@887 -- # return 0
00:06:23.386   10:44:12	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:23.386   10:44:12	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.386   10:44:12	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:23.645  /dev/nbd1
00:06:23.645    10:44:12	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:23.645   10:44:12	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:23.645   10:44:12	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:06:23.645   10:44:12	-- common/autotest_common.sh@867 -- # local i
00:06:23.645   10:44:12	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:23.645   10:44:12	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:23.645   10:44:12	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:06:23.645   10:44:12	-- common/autotest_common.sh@871 -- # break
00:06:23.645   10:44:12	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:23.645   10:44:12	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:23.645   10:44:12	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:23.645  1+0 records in
00:06:23.645  1+0 records out
00:06:23.645  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217402 s, 18.8 MB/s
00:06:23.645    10:44:12	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:23.645   10:44:12	-- common/autotest_common.sh@884 -- # size=4096
00:06:23.645   10:44:12	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:23.645   10:44:12	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:23.645   10:44:12	-- common/autotest_common.sh@887 -- # return 0
00:06:23.646   10:44:12	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:23.646   10:44:12	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.646    10:44:12	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:23.646    10:44:12	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.646     10:44:12	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:23.905    10:44:12	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:23.905    {
00:06:23.905      "nbd_device": "/dev/nbd0",
00:06:23.905      "bdev_name": "Malloc0"
00:06:23.905    },
00:06:23.905    {
00:06:23.905      "nbd_device": "/dev/nbd1",
00:06:23.905      "bdev_name": "Malloc1"
00:06:23.905    }
00:06:23.905  ]'
00:06:23.905     10:44:12	-- bdev/nbd_common.sh@64 -- # echo '[
00:06:23.905    {
00:06:23.905      "nbd_device": "/dev/nbd0",
00:06:23.905      "bdev_name": "Malloc0"
00:06:23.905    },
00:06:23.905    {
00:06:23.905      "nbd_device": "/dev/nbd1",
00:06:23.905      "bdev_name": "Malloc1"
00:06:23.905    }
00:06:23.905  ]'
00:06:23.905     10:44:12	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:23.905    10:44:12	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:23.905  /dev/nbd1'
00:06:23.905     10:44:12	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:23.905  /dev/nbd1'
00:06:23.905     10:44:12	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:23.905    10:44:12	-- bdev/nbd_common.sh@65 -- # count=2
00:06:23.905    10:44:12	-- bdev/nbd_common.sh@66 -- # echo 2
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@95 -- # count=2
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
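The count check above (nbd_get_count) turns the nbd_get_disks JSON into a device count by piping it through jq and grep -c. The same step standalone, assuming rpc.py from the SPDK tree is on PATH (socket path copied from the trace):

# Count attached NBD devices from the RPC JSON, as the trace does.
# '|| true' masks grep's non-zero exit on an empty list, matching the
# 'true' step visible later in the trace (nbd_common.sh@65).
count=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
echo "$count"    # the trace expects 2 here (nbd0 and nbd1)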
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@71 -- # local operation=write
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:23.905  256+0 records in
00:06:23.905  256+0 records out
00:06:23.905  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115164 s, 91.1 MB/s
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:23.905   10:44:12	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:24.164  256+0 records in
00:06:24.164  256+0 records out
00:06:24.164  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185621 s, 56.5 MB/s
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:24.164  256+0 records in
00:06:24.164  256+0 records out
00:06:24.164  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0301189 s, 34.8 MB/s
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:24.164   10:44:12	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:24.165   10:44:12	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:24.165   10:44:12	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:24.165   10:44:12	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:24.165   10:44:12	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:24.165   10:44:12	-- bdev/nbd_common.sh@51 -- # local i
00:06:24.165   10:44:12	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:24.165   10:44:12	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:24.424    10:44:13	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@41 -- # break
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@45 -- # return 0
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:24.424   10:44:13	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:24.684    10:44:13	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:24.684   10:44:13	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:24.684   10:44:13	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:24.684   10:44:13	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:24.684   10:44:13	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:24.684   10:44:13	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:24.684   10:44:13	-- bdev/nbd_common.sh@41 -- # break
00:06:24.684   10:44:13	-- bdev/nbd_common.sh@45 -- # return 0
00:06:24.684    10:44:13	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:24.684    10:44:13	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:24.684     10:44:13	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:24.943    10:44:13	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:24.943     10:44:13	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:24.943     10:44:13	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:24.943    10:44:13	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:24.943     10:44:13	-- bdev/nbd_common.sh@65 -- # echo ''
00:06:24.943     10:44:13	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:24.943     10:44:13	-- bdev/nbd_common.sh@65 -- # true
00:06:24.943    10:44:13	-- bdev/nbd_common.sh@65 -- # count=0
00:06:24.943    10:44:13	-- bdev/nbd_common.sh@66 -- # echo 0
00:06:24.943   10:44:13	-- bdev/nbd_common.sh@104 -- # count=0
00:06:24.943   10:44:13	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:24.943   10:44:13	-- bdev/nbd_common.sh@109 -- # return 0
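The data-verify pass that just completed is a plain dd/cmp round trip: write 1 MiB of urandom through each nbd with O_DIRECT, then compare the first 1 MiB of each device back against the source file. A self-contained sketch of that pattern (device list, block size, and count are copied from the trace; the temp path is an assumption, since the trace writes into the repo tree):

# Write/verify round trip over a set of NBD devices, as traced above.
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest          # path is an assumption

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256        # 1 MiB of random data
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                        # byte-for-byte check
done
rm "$tmp_file"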
00:06:24.943   10:44:13	-- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:25.202   10:44:14	-- event/event.sh@35 -- # sleep 3
00:06:25.461  [2024-12-15 10:44:14.328065] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:25.461  [2024-12-15 10:44:14.423099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:25.461  [2024-12-15 10:44:14.423104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.461  [2024-12-15 10:44:14.475590] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:25.461  [2024-12-15 10:44:14.475651] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:28.750   10:44:17	-- event/event.sh@23 -- # for i in {0..2}
00:06:28.750   10:44:17	-- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:28.750  spdk_app_start Round 1
00:06:28.750   10:44:17	-- event/event.sh@25 -- # waitforlisten 2082481 /var/tmp/spdk-nbd.sock
00:06:28.750   10:44:17	-- common/autotest_common.sh@829 -- # '[' -z 2082481 ']'
00:06:28.750   10:44:17	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:28.750   10:44:17	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:28.750   10:44:17	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:28.750  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:28.750   10:44:17	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:28.750   10:44:17	-- common/autotest_common.sh@10 -- # set +x
00:06:28.750   10:44:17	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:28.750   10:44:17	-- common/autotest_common.sh@862 -- # return 0
00:06:28.750   10:44:17	-- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:28.750  Malloc0
00:06:28.750   10:44:17	-- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:29.009  Malloc1
00:06:29.009   10:44:17	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@12 -- # local i
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:29.009   10:44:17	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:29.268  /dev/nbd0
00:06:29.268    10:44:18	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:29.268   10:44:18	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:29.268   10:44:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:06:29.268   10:44:18	-- common/autotest_common.sh@867 -- # local i
00:06:29.268   10:44:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:29.268   10:44:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:29.268   10:44:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:06:29.268   10:44:18	-- common/autotest_common.sh@871 -- # break
00:06:29.268   10:44:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:29.268   10:44:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:29.268   10:44:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:29.268  1+0 records in
00:06:29.268  1+0 records out
00:06:29.268  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176107 s, 23.3 MB/s
00:06:29.268    10:44:18	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:29.268   10:44:18	-- common/autotest_common.sh@884 -- # size=4096
00:06:29.268   10:44:18	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:29.268   10:44:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:29.268   10:44:18	-- common/autotest_common.sh@887 -- # return 0
00:06:29.268   10:44:18	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:29.268   10:44:18	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:29.268   10:44:18	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:29.527  /dev/nbd1
00:06:29.527    10:44:18	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:29.527   10:44:18	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:29.527   10:44:18	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:06:29.527   10:44:18	-- common/autotest_common.sh@867 -- # local i
00:06:29.527   10:44:18	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:29.527   10:44:18	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:29.527   10:44:18	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:06:29.527   10:44:18	-- common/autotest_common.sh@871 -- # break
00:06:29.527   10:44:18	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:29.527   10:44:18	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:29.527   10:44:18	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:29.527  1+0 records in
00:06:29.527  1+0 records out
00:06:29.527  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212542 s, 19.3 MB/s
00:06:29.527    10:44:18	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:29.527   10:44:18	-- common/autotest_common.sh@884 -- # size=4096
00:06:29.527   10:44:18	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:29.527   10:44:18	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:29.527   10:44:18	-- common/autotest_common.sh@887 -- # return 0
00:06:29.527   10:44:18	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:29.527   10:44:18	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:29.527    10:44:18	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:29.527    10:44:18	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.527     10:44:18	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:29.787    10:44:18	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:29.787    {
00:06:29.787      "nbd_device": "/dev/nbd0",
00:06:29.787      "bdev_name": "Malloc0"
00:06:29.787    },
00:06:29.787    {
00:06:29.787      "nbd_device": "/dev/nbd1",
00:06:29.787      "bdev_name": "Malloc1"
00:06:29.787    }
00:06:29.787  ]'
00:06:29.787     10:44:18	-- bdev/nbd_common.sh@64 -- # echo '[
00:06:29.787    {
00:06:29.787      "nbd_device": "/dev/nbd0",
00:06:29.787      "bdev_name": "Malloc0"
00:06:29.787    },
00:06:29.787    {
00:06:29.787      "nbd_device": "/dev/nbd1",
00:06:29.787      "bdev_name": "Malloc1"
00:06:29.787    }
00:06:29.787  ]'
00:06:29.787     10:44:18	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:29.787    10:44:18	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:29.787  /dev/nbd1'
00:06:29.787     10:44:18	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:29.787  /dev/nbd1'
00:06:29.787     10:44:18	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:29.787    10:44:18	-- bdev/nbd_common.sh@65 -- # count=2
00:06:29.787    10:44:18	-- bdev/nbd_common.sh@66 -- # echo 2
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@95 -- # count=2
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@71 -- # local operation=write
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:29.787  256+0 records in
00:06:29.787  256+0 records out
00:06:29.787  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115814 s, 90.5 MB/s
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:29.787  256+0 records in
00:06:29.787  256+0 records out
00:06:29.787  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191913 s, 54.6 MB/s
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:29.787  256+0 records in
00:06:29.787  256+0 records out
00:06:29.787  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0298972 s, 35.1 MB/s
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@51 -- # local i
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:29.787   10:44:18	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:30.046    10:44:19	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:30.046   10:44:19	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:30.046   10:44:19	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:30.046   10:44:19	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:30.046   10:44:19	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:30.047   10:44:19	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:30.047   10:44:19	-- bdev/nbd_common.sh@41 -- # break
00:06:30.047   10:44:19	-- bdev/nbd_common.sh@45 -- # return 0
00:06:30.047   10:44:19	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:30.047   10:44:19	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:30.306    10:44:19	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:30.306   10:44:19	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:30.306   10:44:19	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:30.306   10:44:19	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:30.306   10:44:19	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:30.306   10:44:19	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:30.306   10:44:19	-- bdev/nbd_common.sh@41 -- # break
00:06:30.306   10:44:19	-- bdev/nbd_common.sh@45 -- # return 0
00:06:30.566    10:44:19	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:30.566    10:44:19	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:30.566     10:44:19	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:30.825    10:44:19	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:30.825     10:44:19	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:30.825     10:44:19	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:30.825    10:44:19	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:30.825     10:44:19	-- bdev/nbd_common.sh@65 -- # echo ''
00:06:30.825     10:44:19	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:30.825     10:44:19	-- bdev/nbd_common.sh@65 -- # true
00:06:30.825    10:44:19	-- bdev/nbd_common.sh@65 -- # count=0
00:06:30.825    10:44:19	-- bdev/nbd_common.sh@66 -- # echo 0
00:06:30.825   10:44:19	-- bdev/nbd_common.sh@104 -- # count=0
00:06:30.825   10:44:19	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:30.825   10:44:19	-- bdev/nbd_common.sh@109 -- # return 0
00:06:30.825   10:44:19	-- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:31.084   10:44:19	-- event/event.sh@35 -- # sleep 3
00:06:31.343  [2024-12-15 10:44:20.153951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:31.343  [2024-12-15 10:44:20.246987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.343  [2024-12-15 10:44:20.246989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.343  [2024-12-15 10:44:20.298978] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:31.343  [2024-12-15 10:44:20.299030] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:34.635   10:44:22	-- event/event.sh@23 -- # for i in {0..2}
00:06:34.635   10:44:22	-- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:06:34.635  spdk_app_start Round 2
00:06:34.635   10:44:22	-- event/event.sh@25 -- # waitforlisten 2082481 /var/tmp/spdk-nbd.sock
00:06:34.635   10:44:22	-- common/autotest_common.sh@829 -- # '[' -z 2082481 ']'
00:06:34.635   10:44:22	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:34.635   10:44:22	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:34.635   10:44:22	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:34.635  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:34.635   10:44:22	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:34.635   10:44:22	-- common/autotest_common.sh@10 -- # set +x
00:06:34.635   10:44:23	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:34.635   10:44:23	-- common/autotest_common.sh@862 -- # return 0
00:06:34.635   10:44:23	-- event/event.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:34.635  Malloc0
00:06:34.635   10:44:23	-- event/event.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:34.893  Malloc1
00:06:34.894   10:44:23	-- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@12 -- # local i
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:34.894  /dev/nbd0
00:06:34.894    10:44:23	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:34.894   10:44:23	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:34.894   10:44:23	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:06:34.894   10:44:23	-- common/autotest_common.sh@867 -- # local i
00:06:34.894   10:44:23	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:34.894   10:44:23	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:34.894   10:44:23	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:06:34.894   10:44:23	-- common/autotest_common.sh@871 -- # break
00:06:34.894   10:44:23	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:34.894   10:44:23	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:34.894   10:44:23	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:34.894  1+0 records in
00:06:34.894  1+0 records out
00:06:34.894  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027511 s, 14.9 MB/s
00:06:34.894    10:44:23	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:35.153   10:44:23	-- common/autotest_common.sh@884 -- # size=4096
00:06:35.153   10:44:23	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:35.153   10:44:23	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:35.153   10:44:23	-- common/autotest_common.sh@887 -- # return 0
00:06:35.153   10:44:23	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:35.153   10:44:23	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:35.153   10:44:23	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:35.153  /dev/nbd1
00:06:35.412    10:44:24	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:35.412   10:44:24	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:35.412   10:44:24	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:06:35.412   10:44:24	-- common/autotest_common.sh@867 -- # local i
00:06:35.412   10:44:24	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:06:35.412   10:44:24	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:06:35.412   10:44:24	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:06:35.412   10:44:24	-- common/autotest_common.sh@871 -- # break
00:06:35.412   10:44:24	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:06:35.412   10:44:24	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:06:35.412   10:44:24	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:35.412  1+0 records in
00:06:35.412  1+0 records out
00:06:35.412  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208295 s, 19.7 MB/s
00:06:35.412    10:44:24	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:35.412   10:44:24	-- common/autotest_common.sh@884 -- # size=4096
00:06:35.412   10:44:24	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdtest
00:06:35.412   10:44:24	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:06:35.412   10:44:24	-- common/autotest_common.sh@887 -- # return 0
00:06:35.412   10:44:24	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:35.412   10:44:24	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:35.412    10:44:24	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:35.412    10:44:24	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:35.412     10:44:24	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:35.672    10:44:24	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:35.672    {
00:06:35.672      "nbd_device": "/dev/nbd0",
00:06:35.672      "bdev_name": "Malloc0"
00:06:35.672    },
00:06:35.672    {
00:06:35.672      "nbd_device": "/dev/nbd1",
00:06:35.672      "bdev_name": "Malloc1"
00:06:35.672    }
00:06:35.672  ]'
00:06:35.672     10:44:24	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:35.672     10:44:24	-- bdev/nbd_common.sh@64 -- # echo '[
00:06:35.672    {
00:06:35.672      "nbd_device": "/dev/nbd0",
00:06:35.672      "bdev_name": "Malloc0"
00:06:35.672    },
00:06:35.672    {
00:06:35.672      "nbd_device": "/dev/nbd1",
00:06:35.672      "bdev_name": "Malloc1"
00:06:35.672    }
00:06:35.672  ]'
00:06:35.672    10:44:24	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:35.672  /dev/nbd1'
00:06:35.672     10:44:24	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:35.672  /dev/nbd1'
00:06:35.672     10:44:24	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:35.672    10:44:24	-- bdev/nbd_common.sh@65 -- # count=2
00:06:35.672    10:44:24	-- bdev/nbd_common.sh@66 -- # echo 2
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@95 -- # count=2
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@71 -- # local operation=write
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:35.672  256+0 records in
00:06:35.672  256+0 records out
00:06:35.672  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113042 s, 92.8 MB/s
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:35.672  256+0 records in
00:06:35.672  256+0 records out
00:06:35.672  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029017 s, 36.1 MB/s
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:35.672  256+0 records in
00:06:35.672  256+0 records out
00:06:35.672  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303853 s, 34.5 MB/s
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/nbdrandtest
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@51 -- # local i
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:35.672   10:44:24	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:35.932    10:44:24	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@41 -- # break
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@45 -- # return 0
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:35.932   10:44:24	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:36.191    10:44:25	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:36.191   10:44:25	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:36.191   10:44:25	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:36.191   10:44:25	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:36.191   10:44:25	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:36.191   10:44:25	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:36.191   10:44:25	-- bdev/nbd_common.sh@41 -- # break
00:06:36.191   10:44:25	-- bdev/nbd_common.sh@45 -- # return 0
00:06:36.191    10:44:25	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:36.191    10:44:25	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:36.191     10:44:25	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:36.451    10:44:25	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:36.451     10:44:25	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:36.452     10:44:25	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:36.452    10:44:25	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:36.452     10:44:25	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:36.452     10:44:25	-- bdev/nbd_common.sh@65 -- # echo ''
00:06:36.452     10:44:25	-- bdev/nbd_common.sh@65 -- # true
00:06:36.452    10:44:25	-- bdev/nbd_common.sh@65 -- # count=0
00:06:36.452    10:44:25	-- bdev/nbd_common.sh@66 -- # echo 0
00:06:36.452   10:44:25	-- bdev/nbd_common.sh@104 -- # count=0
00:06:36.452   10:44:25	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:36.452   10:44:25	-- bdev/nbd_common.sh@109 -- # return 0
00:06:36.452   10:44:25	-- event/event.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:36.711   10:44:25	-- event/event.sh@35 -- # sleep 3
00:06:36.970  [2024-12-15 10:44:25.813012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:36.970  [2024-12-15 10:44:25.907526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:36.970  [2024-12-15 10:44:25.907530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:36.970  [2024-12-15 10:44:25.960001] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:36.970  [2024-12-15 10:44:25.960054] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:40.263   10:44:28	-- event/event.sh@38 -- # waitforlisten 2082481 /var/tmp/spdk-nbd.sock
00:06:40.263   10:44:28	-- common/autotest_common.sh@829 -- # '[' -z 2082481 ']'
00:06:40.263   10:44:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:40.263   10:44:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:40.263   10:44:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:40.263  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:40.263   10:44:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:40.263   10:44:28	-- common/autotest_common.sh@10 -- # set +x
00:06:40.263   10:44:28	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:40.263   10:44:28	-- common/autotest_common.sh@862 -- # return 0
00:06:40.263   10:44:28	-- event/event.sh@39 -- # killprocess 2082481
00:06:40.263   10:44:28	-- common/autotest_common.sh@936 -- # '[' -z 2082481 ']'
00:06:40.264   10:44:28	-- common/autotest_common.sh@940 -- # kill -0 2082481
00:06:40.264    10:44:28	-- common/autotest_common.sh@941 -- # uname
00:06:40.264   10:44:28	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:40.264    10:44:28	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2082481
00:06:40.264   10:44:28	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:40.264   10:44:28	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:40.264   10:44:28	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2082481'
00:06:40.264  killing process with pid 2082481
00:06:40.264   10:44:28	-- common/autotest_common.sh@955 -- # kill 2082481
00:06:40.264   10:44:28	-- common/autotest_common.sh@960 -- # wait 2082481
00:06:40.264  spdk_app_start is called in Round 0.
00:06:40.264  Shutdown signal received, stop current app iteration
00:06:40.264  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:06:40.264  spdk_app_start is called in Round 1.
00:06:40.264  Shutdown signal received, stop current app iteration
00:06:40.264  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:06:40.264  spdk_app_start is called in Round 2.
00:06:40.264  Shutdown signal received, stop current app iteration
00:06:40.264  Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization...
00:06:40.264  spdk_app_start is called in Round 3.
00:06:40.264  Shutdown signal received, stop current app iteration
00:06:40.264   10:44:29	-- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:40.264   10:44:29	-- event/event.sh@42 -- # return 0
00:06:40.264  
00:06:40.264  real	0m18.433s
00:06:40.264  user	0m40.005s
00:06:40.264  sys	0m3.548s
00:06:40.264   10:44:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:40.264   10:44:29	-- common/autotest_common.sh@10 -- # set +x
00:06:40.264  ************************************
00:06:40.264  END TEST app_repeat
00:06:40.264  ************************************
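app_repeat_test, summarized by the Round 0-3 messages above, is a start/kill loop: each round re-creates the two malloc bdevs, runs the nbd write/verify, asks the app to restart itself via spdk_kill_instance SIGTERM, and sleeps while it comes back up. A skeleton of that loop (every name except the rpc call and the traced 'sleep 3' is a placeholder):

# Skeleton of the app_repeat round loop seen above (event.sh@23-35).
rpc_server=/var/tmp/spdk-nbd.sock
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    setup_bdevs_and_verify    # placeholder for the malloc create + nbd verify steps
    # Ask the app to tear down and re-enter spdk_app_start (Rounds 1..3 above).
    rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3                   # matches the 'sleep 3' traced at event.sh@35
done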
00:06:40.264   10:44:29	-- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:40.264   10:44:29	-- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:40.264   10:44:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:40.264   10:44:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:40.264   10:44:29	-- common/autotest_common.sh@10 -- # set +x
00:06:40.264  ************************************
00:06:40.264  START TEST cpu_locks
00:06:40.264  ************************************
00:06:40.264   10:44:29	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/cpu_locks.sh
00:06:40.264  * Looking for test storage...
00:06:40.264  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event
00:06:40.264    10:44:29	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:06:40.264     10:44:29	-- common/autotest_common.sh@1690 -- # lcov --version
00:06:40.264     10:44:29	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:06:40.524    10:44:29	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:06:40.524    10:44:29	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:06:40.524    10:44:29	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:06:40.524    10:44:29	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:06:40.524    10:44:29	-- scripts/common.sh@335 -- # IFS=.-:
00:06:40.524    10:44:29	-- scripts/common.sh@335 -- # read -ra ver1
00:06:40.524    10:44:29	-- scripts/common.sh@336 -- # IFS=.-:
00:06:40.524    10:44:29	-- scripts/common.sh@336 -- # read -ra ver2
00:06:40.524    10:44:29	-- scripts/common.sh@337 -- # local 'op=<'
00:06:40.524    10:44:29	-- scripts/common.sh@339 -- # ver1_l=2
00:06:40.524    10:44:29	-- scripts/common.sh@340 -- # ver2_l=1
00:06:40.524    10:44:29	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:06:40.524    10:44:29	-- scripts/common.sh@343 -- # case "$op" in
00:06:40.524    10:44:29	-- scripts/common.sh@344 -- # : 1
00:06:40.524    10:44:29	-- scripts/common.sh@363 -- # (( v = 0 ))
00:06:40.524    10:44:29	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:40.524     10:44:29	-- scripts/common.sh@364 -- # decimal 1
00:06:40.524     10:44:29	-- scripts/common.sh@352 -- # local d=1
00:06:40.524     10:44:29	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:40.524     10:44:29	-- scripts/common.sh@354 -- # echo 1
00:06:40.524    10:44:29	-- scripts/common.sh@364 -- # ver1[v]=1
00:06:40.524     10:44:29	-- scripts/common.sh@365 -- # decimal 2
00:06:40.524     10:44:29	-- scripts/common.sh@352 -- # local d=2
00:06:40.524     10:44:29	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:40.524     10:44:29	-- scripts/common.sh@354 -- # echo 2
00:06:40.524    10:44:29	-- scripts/common.sh@365 -- # ver2[v]=2
00:06:40.524    10:44:29	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:06:40.524    10:44:29	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:06:40.524    10:44:29	-- scripts/common.sh@367 -- # return 0
00:06:40.524    10:44:29	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:40.524    10:44:29	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:06:40.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.524  		--rc genhtml_branch_coverage=1
00:06:40.524  		--rc genhtml_function_coverage=1
00:06:40.524  		--rc genhtml_legend=1
00:06:40.524  		--rc geninfo_all_blocks=1
00:06:40.524  		--rc geninfo_unexecuted_blocks=1
00:06:40.524  		
00:06:40.524  		'
00:06:40.524    10:44:29	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:06:40.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.524  		--rc genhtml_branch_coverage=1
00:06:40.524  		--rc genhtml_function_coverage=1
00:06:40.524  		--rc genhtml_legend=1
00:06:40.524  		--rc geninfo_all_blocks=1
00:06:40.524  		--rc geninfo_unexecuted_blocks=1
00:06:40.524  		
00:06:40.524  		'
00:06:40.524    10:44:29	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:06:40.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.524  		--rc genhtml_branch_coverage=1
00:06:40.524  		--rc genhtml_function_coverage=1
00:06:40.524  		--rc genhtml_legend=1
00:06:40.524  		--rc geninfo_all_blocks=1
00:06:40.524  		--rc geninfo_unexecuted_blocks=1
00:06:40.524  		
00:06:40.524  		'
00:06:40.524    10:44:29	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:06:40.524  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:40.524  		--rc genhtml_branch_coverage=1
00:06:40.524  		--rc genhtml_function_coverage=1
00:06:40.524  		--rc genhtml_legend=1
00:06:40.524  		--rc geninfo_all_blocks=1
00:06:40.524  		--rc geninfo_unexecuted_blocks=1
00:06:40.524  		
00:06:40.524  		'
00:06:40.524   10:44:29	-- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:06:40.524   10:44:29	-- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:06:40.524   10:44:29	-- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:06:40.524   10:44:29	-- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:06:40.524   10:44:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:40.524   10:44:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:40.524   10:44:29	-- common/autotest_common.sh@10 -- # set +x
00:06:40.524  ************************************
00:06:40.524  START TEST default_locks
00:06:40.524  ************************************
00:06:40.524   10:44:29	-- common/autotest_common.sh@1114 -- # default_locks
00:06:40.524   10:44:29	-- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2085216
00:06:40.524   10:44:29	-- event/cpu_locks.sh@47 -- # waitforlisten 2085216
00:06:40.524   10:44:29	-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:40.524   10:44:29	-- common/autotest_common.sh@829 -- # '[' -z 2085216 ']'
00:06:40.524   10:44:29	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:40.524   10:44:29	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:40.524   10:44:29	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:40.524  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:40.524   10:44:29	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:40.524   10:44:29	-- common/autotest_common.sh@10 -- # set +x
00:06:40.524  [2024-12-15 10:44:29.409341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:40.524  [2024-12-15 10:44:29.409423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085216 ]
00:06:40.524  EAL: No free 2048 kB hugepages reported on node 1
00:06:40.524  [2024-12-15 10:44:29.516729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.788  [2024-12-15 10:44:29.610975] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:40.788  [2024-12-15 10:44:29.611141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.788  [2024-12-15 10:44:29.801922] 'OCF_Core' volume operations registered
00:06:41.051  [2024-12-15 10:44:29.805402] 'OCF_Cache' volume operations registered
00:06:41.051  [2024-12-15 10:44:29.809360] 'OCF Composite' volume operations registered
00:06:41.051  [2024-12-15 10:44:29.812925] 'SPDK_block_device' volume operations registered
00:06:41.310   10:44:30	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:41.310   10:44:30	-- common/autotest_common.sh@862 -- # return 0
00:06:41.310   10:44:30	-- event/cpu_locks.sh@49 -- # locks_exist 2085216
00:06:41.310   10:44:30	-- event/cpu_locks.sh@22 -- # lslocks -p 2085216
00:06:41.310   10:44:30	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:42.689  lslocks: write error
00:06:42.689   10:44:31	-- event/cpu_locks.sh@50 -- # killprocess 2085216
00:06:42.689   10:44:31	-- common/autotest_common.sh@936 -- # '[' -z 2085216 ']'
00:06:42.689   10:44:31	-- common/autotest_common.sh@940 -- # kill -0 2085216
00:06:42.689    10:44:31	-- common/autotest_common.sh@941 -- # uname
00:06:42.689   10:44:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:42.689    10:44:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2085216
00:06:42.689   10:44:31	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:42.689   10:44:31	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:42.689   10:44:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2085216'
00:06:42.689  killing process with pid 2085216
00:06:42.689   10:44:31	-- common/autotest_common.sh@955 -- # kill 2085216
00:06:42.689   10:44:31	-- common/autotest_common.sh@960 -- # wait 2085216
00:06:43.257   10:44:31	-- event/cpu_locks.sh@52 -- # NOT waitforlisten 2085216
00:06:43.257   10:44:31	-- common/autotest_common.sh@650 -- # local es=0
00:06:43.257   10:44:31	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2085216
00:06:43.257   10:44:31	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:43.257   10:44:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:43.257    10:44:31	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:43.257   10:44:31	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:43.257   10:44:31	-- common/autotest_common.sh@653 -- # waitforlisten 2085216
00:06:43.257   10:44:31	-- common/autotest_common.sh@829 -- # '[' -z 2085216 ']'
00:06:43.257   10:44:31	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.257   10:44:31	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:43.257   10:44:31	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.257  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.257   10:44:31	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:43.257   10:44:31	-- common/autotest_common.sh@10 -- # set +x
00:06:43.257  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2085216) - No such process
00:06:43.258  ERROR: process (pid: 2085216) is no longer running
00:06:43.258   10:44:31	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:43.258   10:44:31	-- common/autotest_common.sh@862 -- # return 1
00:06:43.258   10:44:31	-- common/autotest_common.sh@653 -- # es=1
00:06:43.258   10:44:31	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:43.258   10:44:31	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:43.258   10:44:31	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:43.258   10:44:31	-- event/cpu_locks.sh@54 -- # no_locks
00:06:43.258   10:44:31	-- event/cpu_locks.sh@26 -- # lock_files=()
00:06:43.258   10:44:31	-- event/cpu_locks.sh@26 -- # local lock_files
00:06:43.258   10:44:31	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:43.258  
00:06:43.258  real	0m2.635s
00:06:43.258  user	0m2.666s
00:06:43.258  sys	0m1.189s
00:06:43.258   10:44:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:43.258   10:44:31	-- common/autotest_common.sh@10 -- # set +x
00:06:43.258  ************************************
00:06:43.258  END TEST default_locks
00:06:43.258  ************************************
00:06:43.258   10:44:32	-- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:43.258   10:44:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:43.258   10:44:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:43.258   10:44:32	-- common/autotest_common.sh@10 -- # set +x
00:06:43.258  ************************************
00:06:43.258  START TEST default_locks_via_rpc
00:06:43.258  ************************************
00:06:43.258   10:44:32	-- common/autotest_common.sh@1114 -- # default_locks_via_rpc
00:06:43.258   10:44:32	-- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2085598
00:06:43.258   10:44:32	-- event/cpu_locks.sh@63 -- # waitforlisten 2085598
00:06:43.258   10:44:32	-- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:43.258   10:44:32	-- common/autotest_common.sh@829 -- # '[' -z 2085598 ']'
00:06:43.258   10:44:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:43.258   10:44:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:43.258   10:44:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:43.258  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:43.258   10:44:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:43.258   10:44:32	-- common/autotest_common.sh@10 -- # set +x
00:06:43.258  [2024-12-15 10:44:32.098822] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:43.258  [2024-12-15 10:44:32.098905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085598 ]
00:06:43.258  EAL: No free 2048 kB hugepages reported on node 1
00:06:43.258  [2024-12-15 10:44:32.205068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.517  [2024-12-15 10:44:32.302694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:43.517  [2024-12-15 10:44:32.302864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.517  [2024-12-15 10:44:32.489907] 'OCF_Core' volume operations registered
00:06:43.517  [2024-12-15 10:44:32.493158] 'OCF_Cache' volume operations registered
00:06:43.517  [2024-12-15 10:44:32.496789] 'OCF Composite' volume operations registered
00:06:43.517  [2024-12-15 10:44:32.500042] 'SPDK_block_device' volume operations registered
00:06:44.086   10:44:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:44.086   10:44:32	-- common/autotest_common.sh@862 -- # return 0
00:06:44.086   10:44:32	-- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:44.086   10:44:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:44.086   10:44:32	-- common/autotest_common.sh@10 -- # set +x
00:06:44.086   10:44:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:44.086   10:44:32	-- event/cpu_locks.sh@67 -- # no_locks
00:06:44.086   10:44:33	-- event/cpu_locks.sh@26 -- # lock_files=()
00:06:44.086   10:44:33	-- event/cpu_locks.sh@26 -- # local lock_files
00:06:44.086   10:44:33	-- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:44.086   10:44:33	-- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:44.086   10:44:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:06:44.086   10:44:33	-- common/autotest_common.sh@10 -- # set +x
00:06:44.086   10:44:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:44.086   10:44:33	-- event/cpu_locks.sh@71 -- # locks_exist 2085598
00:06:44.086   10:44:33	-- event/cpu_locks.sh@22 -- # lslocks -p 2085598
00:06:44.086   10:44:33	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:45.025   10:44:33	-- event/cpu_locks.sh@73 -- # killprocess 2085598
00:06:45.025   10:44:33	-- common/autotest_common.sh@936 -- # '[' -z 2085598 ']'
00:06:45.025   10:44:33	-- common/autotest_common.sh@940 -- # kill -0 2085598
00:06:45.025    10:44:33	-- common/autotest_common.sh@941 -- # uname
00:06:45.025   10:44:33	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:45.025    10:44:33	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2085598
00:06:45.025   10:44:33	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:45.025   10:44:33	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:45.025   10:44:33	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2085598'
00:06:45.025  killing process with pid 2085598
00:06:45.025   10:44:33	-- common/autotest_common.sh@955 -- # kill 2085598
00:06:45.025   10:44:33	-- common/autotest_common.sh@960 -- # wait 2085598
00:06:45.594  
00:06:45.594  real	0m2.501s
00:06:45.594  user	0m2.572s
00:06:45.594  sys	0m1.032s
00:06:45.594   10:44:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:45.594   10:44:34	-- common/autotest_common.sh@10 -- # set +x
00:06:45.594  ************************************
00:06:45.594  END TEST default_locks_via_rpc
00:06:45.594  ************************************
00:06:45.594   10:44:34	-- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:45.595   10:44:34	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:45.595   10:44:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:45.595   10:44:34	-- common/autotest_common.sh@10 -- # set +x
00:06:45.595  ************************************
00:06:45.595  START TEST non_locking_app_on_locked_coremask
00:06:45.595  ************************************
00:06:45.595   10:44:34	-- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask
00:06:45.595   10:44:34	-- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2085974
00:06:45.595   10:44:34	-- event/cpu_locks.sh@81 -- # waitforlisten 2085974 /var/tmp/spdk.sock
00:06:45.595   10:44:34	-- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:45.595   10:44:34	-- common/autotest_common.sh@829 -- # '[' -z 2085974 ']'
00:06:45.595   10:44:34	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.595   10:44:34	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:45.595   10:44:34	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.595  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.595   10:44:34	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:45.595   10:44:34	-- common/autotest_common.sh@10 -- # set +x
00:06:45.854  [2024-12-15 10:44:34.646574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:45.854  [2024-12-15 10:44:34.646654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085974 ]
00:06:45.854  EAL: No free 2048 kB hugepages reported on node 1
00:06:45.854  [2024-12-15 10:44:34.749992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.854  [2024-12-15 10:44:34.855139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:45.854  [2024-12-15 10:44:34.855291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.113  [2024-12-15 10:44:35.040930] 'OCF_Core' volume operations registered
00:06:46.113  [2024-12-15 10:44:35.044136] 'OCF_Cache' volume operations registered
00:06:46.113  [2024-12-15 10:44:35.047729] 'OCF Composite' volume operations registered
00:06:46.113  [2024-12-15 10:44:35.050932] 'SPDK_block_device' volume operations registered
00:06:46.681   10:44:35	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:46.681   10:44:35	-- common/autotest_common.sh@862 -- # return 0
00:06:46.681   10:44:35	-- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2086007
00:06:46.681   10:44:35	-- event/cpu_locks.sh@85 -- # waitforlisten 2086007 /var/tmp/spdk2.sock
00:06:46.681   10:44:35	-- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:46.681   10:44:35	-- common/autotest_common.sh@829 -- # '[' -z 2086007 ']'
00:06:46.681   10:44:35	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:46.681   10:44:35	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:46.681   10:44:35	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:46.681  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:46.681   10:44:35	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:46.681   10:44:35	-- common/autotest_common.sh@10 -- # set +x
00:06:46.681  [2024-12-15 10:44:35.668311] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:46.681  [2024-12-15 10:44:35.668385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086007 ]
00:06:46.941  EAL: No free 2048 kB hugepages reported on node 1
00:06:46.941  [2024-12-15 10:44:35.809465] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:46.941  [2024-12-15 10:44:35.809509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:47.200  [2024-12-15 10:44:36.015985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:47.200  [2024-12-15 10:44:36.016157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:47.459  [2024-12-15 10:44:36.380475] 'OCF_Core' volume operations registered
00:06:47.459  [2024-12-15 10:44:36.387749] 'OCF_Cache' volume operations registered
00:06:47.459  [2024-12-15 10:44:36.391365] 'OCF Composite' volume operations registered
00:06:47.459  [2024-12-15 10:44:36.398646] 'SPDK_block_device' volume operations registered
00:06:48.397   10:44:37	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:48.397   10:44:37	-- common/autotest_common.sh@862 -- # return 0
00:06:48.397   10:44:37	-- event/cpu_locks.sh@87 -- # locks_exist 2085974
00:06:48.397   10:44:37	-- event/cpu_locks.sh@22 -- # lslocks -p 2085974
00:06:48.397   10:44:37	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:50.934  lslocks: write error
00:06:50.934   10:44:39	-- event/cpu_locks.sh@89 -- # killprocess 2085974
00:06:50.934   10:44:39	-- common/autotest_common.sh@936 -- # '[' -z 2085974 ']'
00:06:50.934   10:44:39	-- common/autotest_common.sh@940 -- # kill -0 2085974
00:06:50.934    10:44:39	-- common/autotest_common.sh@941 -- # uname
00:06:50.934   10:44:39	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:50.934    10:44:39	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2085974
00:06:50.934   10:44:39	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:50.934   10:44:39	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:50.934   10:44:39	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2085974'
00:06:50.934  killing process with pid 2085974
00:06:50.934   10:44:39	-- common/autotest_common.sh@955 -- # kill 2085974
00:06:50.934   10:44:39	-- common/autotest_common.sh@960 -- # wait 2085974
00:06:51.872   10:44:40	-- event/cpu_locks.sh@90 -- # killprocess 2086007
00:06:51.872   10:44:40	-- common/autotest_common.sh@936 -- # '[' -z 2086007 ']'
00:06:51.872   10:44:40	-- common/autotest_common.sh@940 -- # kill -0 2086007
00:06:51.872    10:44:40	-- common/autotest_common.sh@941 -- # uname
00:06:51.872   10:44:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:51.872    10:44:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2086007
00:06:51.872   10:44:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:51.872   10:44:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:51.872   10:44:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2086007'
00:06:51.872  killing process with pid 2086007
00:06:51.872   10:44:40	-- common/autotest_common.sh@955 -- # kill 2086007
00:06:51.872   10:44:40	-- common/autotest_common.sh@960 -- # wait 2086007
00:06:52.441  
00:06:52.441  real	0m6.825s
00:06:52.441  user	0m7.473s
00:06:52.441  sys	0m2.303s
00:06:52.441   10:44:41	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:52.441   10:44:41	-- common/autotest_common.sh@10 -- # set +x
00:06:52.441  ************************************
00:06:52.441  END TEST non_locking_app_on_locked_coremask
00:06:52.441  ************************************
00:06:52.701   10:44:41	-- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:52.701   10:44:41	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:52.701   10:44:41	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:52.701   10:44:41	-- common/autotest_common.sh@10 -- # set +x
00:06:52.701  ************************************
00:06:52.701  START TEST locking_app_on_unlocked_coremask
00:06:52.701  ************************************
00:06:52.701   10:44:41	-- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask
00:06:52.701   10:44:41	-- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2086928
00:06:52.701   10:44:41	-- event/cpu_locks.sh@99 -- # waitforlisten 2086928 /var/tmp/spdk.sock
00:06:52.701   10:44:41	-- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:52.701   10:44:41	-- common/autotest_common.sh@829 -- # '[' -z 2086928 ']'
00:06:52.701   10:44:41	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:52.701   10:44:41	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:52.701   10:44:41	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:52.701  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:52.701   10:44:41	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:52.701   10:44:41	-- common/autotest_common.sh@10 -- # set +x
00:06:52.701  [2024-12-15 10:44:41.526096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:52.701  [2024-12-15 10:44:41.526169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086928 ]
00:06:52.701  EAL: No free 2048 kB hugepages reported on node 1
00:06:52.701  [2024-12-15 10:44:41.631755] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:52.701  [2024-12-15 10:44:41.631793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:52.961  [2024-12-15 10:44:41.737268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:52.961  [2024-12-15 10:44:41.737429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.961  [2024-12-15 10:44:41.937674] 'OCF_Core' volume operations registered
00:06:52.961  [2024-12-15 10:44:41.941184] 'OCF_Cache' volume operations registered
00:06:52.961  [2024-12-15 10:44:41.945136] 'OCF Composite' volume operations registered
00:06:52.961  [2024-12-15 10:44:41.948643] 'SPDK_block_device' volume operations registered
00:06:53.529   10:44:42	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:53.529   10:44:42	-- common/autotest_common.sh@862 -- # return 0
00:06:53.529   10:44:42	-- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:53.529   10:44:42	-- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2086964
00:06:53.529   10:44:42	-- event/cpu_locks.sh@103 -- # waitforlisten 2086964 /var/tmp/spdk2.sock
00:06:53.529   10:44:42	-- common/autotest_common.sh@829 -- # '[' -z 2086964 ']'
00:06:53.529   10:44:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:53.529   10:44:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:53.529   10:44:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:53.529  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:53.529   10:44:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:53.529   10:44:42	-- common/autotest_common.sh@10 -- # set +x
00:06:53.529  [2024-12-15 10:44:42.523521] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:53.529  [2024-12-15 10:44:42.523594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2086964 ]
00:06:53.788  EAL: No free 2048 kB hugepages reported on node 1
00:06:53.789  [2024-12-15 10:44:42.661504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.048  [2024-12-15 10:44:42.854259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:54.048  [2024-12-15 10:44:42.854417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.307  [2024-12-15 10:44:43.243003] 'OCF_Core' volume operations registered
00:06:54.307  [2024-12-15 10:44:43.246270] 'OCF_Cache' volume operations registered
00:06:54.307  [2024-12-15 10:44:43.253982] 'OCF Composite' volume operations registered
00:06:54.307  [2024-12-15 10:44:43.261276] 'SPDK_block_device' volume operations registered
00:06:55.244   10:44:44	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:55.244   10:44:44	-- common/autotest_common.sh@862 -- # return 0
00:06:55.244   10:44:44	-- event/cpu_locks.sh@105 -- # locks_exist 2086964
00:06:55.244   10:44:44	-- event/cpu_locks.sh@22 -- # lslocks -p 2086964
00:06:55.244   10:44:44	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:57.780  lslocks: write error
00:06:57.780   10:44:46	-- event/cpu_locks.sh@107 -- # killprocess 2086928
00:06:57.780   10:44:46	-- common/autotest_common.sh@936 -- # '[' -z 2086928 ']'
00:06:57.780   10:44:46	-- common/autotest_common.sh@940 -- # kill -0 2086928
00:06:57.780    10:44:46	-- common/autotest_common.sh@941 -- # uname
00:06:57.780   10:44:46	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:57.780    10:44:46	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2086928
00:06:57.780   10:44:46	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:57.780   10:44:46	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:57.780   10:44:46	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2086928'
00:06:57.780  killing process with pid 2086928
00:06:57.780   10:44:46	-- common/autotest_common.sh@955 -- # kill 2086928
00:06:57.780   10:44:46	-- common/autotest_common.sh@960 -- # wait 2086928
00:06:58.348   10:44:47	-- event/cpu_locks.sh@108 -- # killprocess 2086964
00:06:58.348   10:44:47	-- common/autotest_common.sh@936 -- # '[' -z 2086964 ']'
00:06:58.348   10:44:47	-- common/autotest_common.sh@940 -- # kill -0 2086964
00:06:58.348    10:44:47	-- common/autotest_common.sh@941 -- # uname
00:06:58.348   10:44:47	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:06:58.348    10:44:47	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2086964
00:06:58.607   10:44:47	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:06:58.607   10:44:47	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:06:58.607   10:44:47	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2086964'
00:06:58.607  killing process with pid 2086964
00:06:58.607   10:44:47	-- common/autotest_common.sh@955 -- # kill 2086964
00:06:58.607   10:44:47	-- common/autotest_common.sh@960 -- # wait 2086964
00:06:59.176  
00:06:59.176  real	0m6.471s
00:06:59.176  user	0m6.865s
00:06:59.176  sys	0m2.344s
00:06:59.176   10:44:47	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:06:59.176   10:44:47	-- common/autotest_common.sh@10 -- # set +x
00:06:59.176  ************************************
00:06:59.176  END TEST locking_app_on_unlocked_coremask
00:06:59.176  ************************************
00:06:59.176   10:44:47	-- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:59.176   10:44:47	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:06:59.176   10:44:47	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:59.176   10:44:47	-- common/autotest_common.sh@10 -- # set +x
00:06:59.176  ************************************
00:06:59.176  START TEST locking_app_on_locked_coremask
00:06:59.176  ************************************
00:06:59.176   10:44:47	-- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask
00:06:59.176   10:44:47	-- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2087770
00:06:59.176   10:44:47	-- event/cpu_locks.sh@116 -- # waitforlisten 2087770 /var/tmp/spdk.sock
00:06:59.176   10:44:47	-- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:59.176   10:44:47	-- common/autotest_common.sh@829 -- # '[' -z 2087770 ']'
00:06:59.176   10:44:47	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:59.176   10:44:47	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:59.176   10:44:47	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:59.176  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:59.176   10:44:47	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:59.176   10:44:47	-- common/autotest_common.sh@10 -- # set +x
00:06:59.176  [2024-12-15 10:44:48.049853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:59.176  [2024-12-15 10:44:48.049936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087770 ]
00:06:59.176  EAL: No free 2048 kB hugepages reported on node 1
00:06:59.176  [2024-12-15 10:44:48.154982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.435  [2024-12-15 10:44:48.249017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:59.435  [2024-12-15 10:44:48.249189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.435  [2024-12-15 10:44:48.441994] 'OCF_Core' volume operations registered
00:06:59.435  [2024-12-15 10:44:48.445478] 'OCF_Cache' volume operations registered
00:06:59.435  [2024-12-15 10:44:48.449450] 'OCF Composite' volume operations registered
00:06:59.694  [2024-12-15 10:44:48.452955] 'SPDK_block_device' volume operations registered
00:06:59.953   10:44:48	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:59.953   10:44:48	-- common/autotest_common.sh@862 -- # return 0
00:06:59.953   10:44:48	-- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:59.953   10:44:48	-- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2087875
00:06:59.953   10:44:48	-- event/cpu_locks.sh@120 -- # NOT waitforlisten 2087875 /var/tmp/spdk2.sock
00:06:59.953   10:44:48	-- common/autotest_common.sh@650 -- # local es=0
00:06:59.953   10:44:48	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2087875 /var/tmp/spdk2.sock
00:06:59.953   10:44:48	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:59.953   10:44:48	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:59.954    10:44:48	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:59.954   10:44:48	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:59.954   10:44:48	-- common/autotest_common.sh@653 -- # waitforlisten 2087875 /var/tmp/spdk2.sock
00:06:59.954   10:44:48	-- common/autotest_common.sh@829 -- # '[' -z 2087875 ']'
00:06:59.954   10:44:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:59.954   10:44:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:06:59.954   10:44:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:59.954  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:59.954   10:44:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:06:59.954   10:44:48	-- common/autotest_common.sh@10 -- # set +x
00:07:00.212  [2024-12-15 10:44:48.982118] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:00.212  [2024-12-15 10:44:48.982192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087875 ]
00:07:00.212  EAL: No free 2048 kB hugepages reported on node 1
00:07:00.212  [2024-12-15 10:44:49.122647] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2087770 has claimed it.
00:07:00.212  [2024-12-15 10:44:49.122706] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:00.803  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2087875) - No such process
00:07:00.803  ERROR: process (pid: 2087875) is no longer running
00:07:00.803   10:44:49	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:00.803   10:44:49	-- common/autotest_common.sh@862 -- # return 1
00:07:00.803   10:44:49	-- common/autotest_common.sh@653 -- # es=1
00:07:00.803   10:44:49	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:00.803   10:44:49	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:00.803   10:44:49	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:00.803   10:44:49	-- event/cpu_locks.sh@122 -- # locks_exist 2087770
00:07:00.803   10:44:49	-- event/cpu_locks.sh@22 -- # lslocks -p 2087770
00:07:00.803   10:44:49	-- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:02.183  lslocks: write error
00:07:02.183   10:44:50	-- event/cpu_locks.sh@124 -- # killprocess 2087770
00:07:02.183   10:44:50	-- common/autotest_common.sh@936 -- # '[' -z 2087770 ']'
00:07:02.183   10:44:50	-- common/autotest_common.sh@940 -- # kill -0 2087770
00:07:02.183    10:44:50	-- common/autotest_common.sh@941 -- # uname
00:07:02.183   10:44:50	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:02.183    10:44:50	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2087770
00:07:02.183   10:44:50	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:02.183   10:44:50	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:02.183   10:44:50	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2087770'
00:07:02.183  killing process with pid 2087770
00:07:02.183   10:44:50	-- common/autotest_common.sh@955 -- # kill 2087770
00:07:02.183   10:44:50	-- common/autotest_common.sh@960 -- # wait 2087770
00:07:02.442  
00:07:02.442  real	0m3.460s
00:07:02.442  user	0m3.748s
00:07:02.442  sys	0m1.302s
00:07:02.442   10:44:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:02.442   10:44:51	-- common/autotest_common.sh@10 -- # set +x
00:07:02.442  ************************************
00:07:02.442  END TEST locking_app_on_locked_coremask
00:07:02.442  ************************************
00:07:02.701   10:44:51	-- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:02.701   10:44:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:02.701   10:44:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:02.701   10:44:51	-- common/autotest_common.sh@10 -- # set +x
00:07:02.701  ************************************
00:07:02.701  START TEST locking_overlapped_coremask
00:07:02.701  ************************************
00:07:02.701   10:44:51	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask
00:07:02.701   10:44:51	-- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:02.701   10:44:51	-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2088256
00:07:02.701   10:44:51	-- event/cpu_locks.sh@133 -- # waitforlisten 2088256 /var/tmp/spdk.sock
00:07:02.701   10:44:51	-- common/autotest_common.sh@829 -- # '[' -z 2088256 ']'
00:07:02.701   10:44:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:02.701   10:44:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:02.701   10:44:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:02.701  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:02.701   10:44:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:02.701   10:44:51	-- common/autotest_common.sh@10 -- # set +x
00:07:02.701  [2024-12-15 10:44:51.533219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:02.701  [2024-12-15 10:44:51.533272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088256 ]
00:07:02.701  EAL: No free 2048 kB hugepages reported on node 1
00:07:02.701  [2024-12-15 10:44:51.616123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:02.961  [2024-12-15 10:44:51.722481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:02.961  [2024-12-15 10:44:51.724644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:02.961  [2024-12-15 10:44:51.724666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:02.961  [2024-12-15 10:44:51.724669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:02.961  [2024-12-15 10:44:51.923502] 'OCF_Core' volume operations registered
00:07:02.961  [2024-12-15 10:44:51.926989] 'OCF_Cache' volume operations registered
00:07:02.961  [2024-12-15 10:44:51.930945] 'OCF Composite' volume operations registered
00:07:02.961  [2024-12-15 10:44:51.934445] 'SPDK_block_device' volume operations registered
00:07:03.530   10:44:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:03.530   10:44:52	-- common/autotest_common.sh@862 -- # return 0
00:07:03.530   10:44:52	-- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2088440
00:07:03.530   10:44:52	-- event/cpu_locks.sh@137 -- # NOT waitforlisten 2088440 /var/tmp/spdk2.sock
00:07:03.530   10:44:52	-- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:03.530   10:44:52	-- common/autotest_common.sh@650 -- # local es=0
00:07:03.530   10:44:52	-- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2088440 /var/tmp/spdk2.sock
00:07:03.530   10:44:52	-- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:07:03.530   10:44:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:03.530    10:44:52	-- common/autotest_common.sh@642 -- # type -t waitforlisten
00:07:03.530   10:44:52	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:03.530   10:44:52	-- common/autotest_common.sh@653 -- # waitforlisten 2088440 /var/tmp/spdk2.sock
00:07:03.530   10:44:52	-- common/autotest_common.sh@829 -- # '[' -z 2088440 ']'
00:07:03.530   10:44:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:03.530   10:44:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:03.530   10:44:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:03.530  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:03.530   10:44:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:03.530   10:44:52	-- common/autotest_common.sh@10 -- # set +x
00:07:03.790  [2024-12-15 10:44:52.580726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:03.790  [2024-12-15 10:44:52.580804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088440 ]
00:07:03.790  EAL: No free 2048 kB hugepages reported on node 1
00:07:03.790  [2024-12-15 10:44:52.700075] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2088256 has claimed it.
00:07:03.790  [2024-12-15 10:44:52.700119] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:04.358  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (2088440) - No such process
00:07:04.358  ERROR: process (pid: 2088440) is no longer running
00:07:04.358   10:44:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:04.358   10:44:53	-- common/autotest_common.sh@862 -- # return 1
00:07:04.358   10:44:53	-- common/autotest_common.sh@653 -- # es=1
00:07:04.358   10:44:53	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:04.358   10:44:53	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:04.358   10:44:53	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:04.358   10:44:53	-- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:04.358   10:44:53	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:04.358   10:44:53	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:04.358   10:44:53	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:04.358   10:44:53	-- event/cpu_locks.sh@141 -- # killprocess 2088256
00:07:04.358   10:44:53	-- common/autotest_common.sh@936 -- # '[' -z 2088256 ']'
00:07:04.358   10:44:53	-- common/autotest_common.sh@940 -- # kill -0 2088256
00:07:04.358    10:44:53	-- common/autotest_common.sh@941 -- # uname
00:07:04.358   10:44:53	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:04.358    10:44:53	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2088256
00:07:04.358   10:44:53	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:04.358   10:44:53	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:04.358   10:44:53	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2088256'
00:07:04.358  killing process with pid 2088256
00:07:04.358   10:44:53	-- common/autotest_common.sh@955 -- # kill 2088256
00:07:04.358   10:44:53	-- common/autotest_common.sh@960 -- # wait 2088256
00:07:04.927  
00:07:04.927  real	0m2.430s
00:07:04.927  user	0m6.815s
00:07:04.927  sys	0m0.649s
00:07:04.927   10:44:53	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:04.927   10:44:53	-- common/autotest_common.sh@10 -- # set +x
00:07:04.927  ************************************
00:07:04.927  END TEST locking_overlapped_coremask
00:07:04.927  ************************************
00:07:05.190   10:44:53	-- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:05.190   10:44:53	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:05.190   10:44:53	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:05.190   10:44:53	-- common/autotest_common.sh@10 -- # set +x
00:07:05.190  ************************************
00:07:05.190  START TEST locking_overlapped_coremask_via_rpc
00:07:05.190  ************************************
00:07:05.190   10:44:53	-- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc
00:07:05.190   10:44:53	-- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2088653
00:07:05.190   10:44:53	-- event/cpu_locks.sh@149 -- # waitforlisten 2088653 /var/tmp/spdk.sock
00:07:05.190   10:44:53	-- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:05.190   10:44:53	-- common/autotest_common.sh@829 -- # '[' -z 2088653 ']'
00:07:05.190   10:44:53	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.190   10:44:53	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:05.190   10:44:53	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.190  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.190   10:44:53	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:05.190   10:44:53	-- common/autotest_common.sh@10 -- # set +x
00:07:05.190  [2024-12-15 10:44:54.034062] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:05.190  [2024-12-15 10:44:54.034134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088653 ]
00:07:05.190  EAL: No free 2048 kB hugepages reported on node 1
00:07:05.190  [2024-12-15 10:44:54.136765] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:05.190  [2024-12-15 10:44:54.136803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:05.514  [2024-12-15 10:44:54.244261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:05.514  [2024-12-15 10:44:54.244450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:05.514  [2024-12-15 10:44:54.244535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:05.514  [2024-12-15 10:44:54.244539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.514  [2024-12-15 10:44:54.443065] 'OCF_Core' volume operations registered
00:07:05.514  [2024-12-15 10:44:54.446543] 'OCF_Cache' volume operations registered
00:07:05.514  [2024-12-15 10:44:54.450472] 'OCF Composite' volume operations registered
00:07:05.514  [2024-12-15 10:44:54.453965] 'SPDK_block_device' volume operations registered
00:07:06.181   10:44:54	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:06.181   10:44:54	-- common/autotest_common.sh@862 -- # return 0
00:07:06.181   10:44:54	-- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2088835
00:07:06.181   10:44:54	-- event/cpu_locks.sh@153 -- # waitforlisten 2088835 /var/tmp/spdk2.sock
00:07:06.181   10:44:54	-- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:06.181   10:44:54	-- common/autotest_common.sh@829 -- # '[' -z 2088835 ']'
00:07:06.181   10:44:54	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:06.181   10:44:54	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:06.181   10:44:54	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:06.181  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:06.181   10:44:54	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:06.181   10:44:54	-- common/autotest_common.sh@10 -- # set +x
00:07:06.181  [2024-12-15 10:44:54.978676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:06.181  [2024-12-15 10:44:54.978736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2088835 ]
00:07:06.181  EAL: No free 2048 kB hugepages reported on node 1
00:07:06.181  [2024-12-15 10:44:55.074263] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:06.181  [2024-12-15 10:44:55.074293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:06.440  [2024-12-15 10:44:55.239496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:06.440  [2024-12-15 10:44:55.239665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:06.440  [2024-12-15 10:44:55.239775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.440  [2024-12-15 10:44:55.239777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:07:06.699  [2024-12-15 10:44:55.567604] 'OCF_Core' volume operations registered
00:07:06.699  [2024-12-15 10:44:55.574542] 'OCF_Cache' volume operations registered
00:07:06.699  [2024-12-15 10:44:55.581913] 'OCF Composite' volume operations registered
00:07:06.699  [2024-12-15 10:44:55.588850] 'SPDK_block_device' volume operations registered
00:07:06.959   10:44:55	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:06.959   10:44:55	-- common/autotest_common.sh@862 -- # return 0
00:07:06.959   10:44:55	-- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:06.959   10:44:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.959   10:44:55	-- common/autotest_common.sh@10 -- # set +x
00:07:06.959   10:44:55	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:06.959   10:44:55	-- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:06.959   10:44:55	-- common/autotest_common.sh@650 -- # local es=0
00:07:06.959   10:44:55	-- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:06.959   10:44:55	-- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:07:06.959   10:44:55	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.959    10:44:55	-- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:07:06.959   10:44:55	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:06.959   10:44:55	-- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:06.959   10:44:55	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:06.959   10:44:55	-- common/autotest_common.sh@10 -- # set +x
00:07:07.218  [2024-12-15 10:44:55.976693] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2088653 has claimed it.
00:07:07.218  request:
00:07:07.218  {
00:07:07.218  "method": "framework_enable_cpumask_locks",
00:07:07.218  "req_id": 1
00:07:07.218  }
00:07:07.218  Got JSON-RPC error response
00:07:07.218  response:
00:07:07.218  {
00:07:07.218  "code": -32603,
00:07:07.218  "message": "Failed to claim CPU core: 2"
00:07:07.218  }
00:07:07.218   10:44:55	-- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:07:07.218   10:44:55	-- common/autotest_common.sh@653 -- # es=1
00:07:07.218   10:44:55	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:07.218   10:44:55	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:07.218   10:44:55	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:07.218   10:44:55	-- event/cpu_locks.sh@158 -- # waitforlisten 2088653 /var/tmp/spdk.sock
00:07:07.218   10:44:55	-- common/autotest_common.sh@829 -- # '[' -z 2088653 ']'
00:07:07.218   10:44:55	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:07.218   10:44:55	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:07.218   10:44:55	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:07.218  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:07.218   10:44:55	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:07.218   10:44:55	-- common/autotest_common.sh@10 -- # set +x
00:07:07.478   10:44:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:07.478   10:44:56	-- common/autotest_common.sh@862 -- # return 0
00:07:07.478   10:44:56	-- event/cpu_locks.sh@159 -- # waitforlisten 2088835 /var/tmp/spdk2.sock
00:07:07.478   10:44:56	-- common/autotest_common.sh@829 -- # '[' -z 2088835 ']'
00:07:07.478   10:44:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:07.478   10:44:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:07.478   10:44:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:07.478  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:07.478   10:44:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:07.478   10:44:56	-- common/autotest_common.sh@10 -- # set +x
00:07:07.478   10:44:56	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:07.478   10:44:56	-- common/autotest_common.sh@862 -- # return 0
00:07:07.478   10:44:56	-- event/cpu_locks.sh@161 -- # check_remaining_locks
00:07:07.478   10:44:56	-- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:07.478   10:44:56	-- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:07.478   10:44:56	-- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:07.478  
00:07:07.478  real	0m2.470s
00:07:07.478  user	0m1.169s
00:07:07.478  sys	0m0.227s
00:07:07.478   10:44:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:07.478   10:44:56	-- common/autotest_common.sh@10 -- # set +x
00:07:07.478  ************************************
00:07:07.478  END TEST locking_overlapped_coremask_via_rpc
00:07:07.478  ************************************
00:07:07.478   10:44:56	-- event/cpu_locks.sh@174 -- # cleanup
00:07:07.478   10:44:56	-- event/cpu_locks.sh@15 -- # [[ -z 2088653 ]]
00:07:07.478   10:44:56	-- event/cpu_locks.sh@15 -- # killprocess 2088653
00:07:07.478   10:44:56	-- common/autotest_common.sh@936 -- # '[' -z 2088653 ']'
00:07:07.478   10:44:56	-- common/autotest_common.sh@940 -- # kill -0 2088653
00:07:07.478    10:44:56	-- common/autotest_common.sh@941 -- # uname
00:07:07.737   10:44:56	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:07.737    10:44:56	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2088653
00:07:07.737   10:44:56	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:07.737   10:44:56	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:07.737   10:44:56	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2088653'
00:07:07.737  killing process with pid 2088653
00:07:07.737   10:44:56	-- common/autotest_common.sh@955 -- # kill 2088653
00:07:07.737   10:44:56	-- common/autotest_common.sh@960 -- # wait 2088653
00:07:08.307   10:44:57	-- event/cpu_locks.sh@16 -- # [[ -z 2088835 ]]
00:07:08.307   10:44:57	-- event/cpu_locks.sh@16 -- # killprocess 2088835
00:07:08.307   10:44:57	-- common/autotest_common.sh@936 -- # '[' -z 2088835 ']'
00:07:08.307   10:44:57	-- common/autotest_common.sh@940 -- # kill -0 2088835
00:07:08.307    10:44:57	-- common/autotest_common.sh@941 -- # uname
00:07:08.307   10:44:57	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:08.307    10:44:57	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2088835
00:07:08.307   10:44:57	-- common/autotest_common.sh@942 -- # process_name=reactor_2
00:07:08.307   10:44:57	-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:07:08.307   10:44:57	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2088835'
00:07:08.307  killing process with pid 2088835
00:07:08.307   10:44:57	-- common/autotest_common.sh@955 -- # kill 2088835
00:07:08.307   10:44:57	-- common/autotest_common.sh@960 -- # wait 2088835
00:07:08.876   10:44:57	-- event/cpu_locks.sh@18 -- # rm -f
00:07:08.876   10:44:57	-- event/cpu_locks.sh@1 -- # cleanup
00:07:08.876   10:44:57	-- event/cpu_locks.sh@15 -- # [[ -z 2088653 ]]
00:07:08.876   10:44:57	-- event/cpu_locks.sh@15 -- # killprocess 2088653
00:07:08.876   10:44:57	-- common/autotest_common.sh@936 -- # '[' -z 2088653 ']'
00:07:08.876   10:44:57	-- common/autotest_common.sh@940 -- # kill -0 2088653
00:07:08.876  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2088653) - No such process
00:07:08.876   10:44:57	-- common/autotest_common.sh@963 -- # echo 'Process with pid 2088653 is not found'
00:07:08.876  Process with pid 2088653 is not found
00:07:08.876   10:44:57	-- event/cpu_locks.sh@16 -- # [[ -z 2088835 ]]
00:07:08.876   10:44:57	-- event/cpu_locks.sh@16 -- # killprocess 2088835
00:07:08.876   10:44:57	-- common/autotest_common.sh@936 -- # '[' -z 2088835 ']'
00:07:08.876   10:44:57	-- common/autotest_common.sh@940 -- # kill -0 2088835
00:07:08.876  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2088835) - No such process
00:07:08.876   10:44:57	-- common/autotest_common.sh@963 -- # echo 'Process with pid 2088835 is not found'
00:07:08.876  Process with pid 2088835 is not found
00:07:08.876   10:44:57	-- event/cpu_locks.sh@18 -- # rm -f
00:07:08.876  
00:07:08.876  real	0m28.597s
00:07:08.876  user	0m44.918s
00:07:08.876  sys	0m10.295s
00:07:08.876   10:44:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:08.876   10:44:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.876  ************************************
00:07:08.876  END TEST cpu_locks
00:07:08.876  ************************************
00:07:08.876  
00:07:08.876  real	0m56.468s
00:07:08.876  user	1m40.407s
00:07:08.876  sys	0m15.171s
00:07:08.876   10:44:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:08.876   10:44:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.876  ************************************
00:07:08.876  END TEST event
00:07:08.876  ************************************
00:07:08.876   10:44:57	-- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh
00:07:08.876   10:44:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:08.876   10:44:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:08.876   10:44:57	-- common/autotest_common.sh@10 -- # set +x
00:07:08.876  ************************************
00:07:08.876  START TEST thread
00:07:08.876  ************************************
00:07:08.876   10:44:57	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/thread.sh
00:07:09.136  * Looking for test storage...
00:07:09.136  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread
00:07:09.136    10:44:57	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:09.136     10:44:57	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:09.136     10:44:57	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:09.136    10:44:57	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:09.136    10:44:57	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:09.136    10:44:57	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:09.136    10:44:57	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:09.136    10:44:57	-- scripts/common.sh@335 -- # IFS=.-:
00:07:09.136    10:44:57	-- scripts/common.sh@335 -- # read -ra ver1
00:07:09.136    10:44:57	-- scripts/common.sh@336 -- # IFS=.-:
00:07:09.136    10:44:57	-- scripts/common.sh@336 -- # read -ra ver2
00:07:09.136    10:44:57	-- scripts/common.sh@337 -- # local 'op=<'
00:07:09.136    10:44:57	-- scripts/common.sh@339 -- # ver1_l=2
00:07:09.136    10:44:57	-- scripts/common.sh@340 -- # ver2_l=1
00:07:09.136    10:44:57	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:09.136    10:44:57	-- scripts/common.sh@343 -- # case "$op" in
00:07:09.136    10:44:57	-- scripts/common.sh@344 -- # : 1
00:07:09.136    10:44:57	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:09.136    10:44:57	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:09.136     10:44:58	-- scripts/common.sh@364 -- # decimal 1
00:07:09.136     10:44:58	-- scripts/common.sh@352 -- # local d=1
00:07:09.136     10:44:58	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:09.136     10:44:58	-- scripts/common.sh@354 -- # echo 1
00:07:09.136    10:44:58	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:09.136     10:44:58	-- scripts/common.sh@365 -- # decimal 2
00:07:09.136     10:44:58	-- scripts/common.sh@352 -- # local d=2
00:07:09.136     10:44:58	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:09.136     10:44:58	-- scripts/common.sh@354 -- # echo 2
00:07:09.136    10:44:58	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:09.136    10:44:58	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:09.136    10:44:58	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:09.136    10:44:58	-- scripts/common.sh@367 -- # return 0
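[Editor's note] The scripts/common.sh trace above is a field-by-field version compare: "lt 1.15 2" splits both strings on ".-:" and returns 0 because 1 < 2 in the first field, which selects the branch-coverage LCOV options below. A condensed sketch of the same walk:

    IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15)
    IFS=.-: read -ra ver2 <<< "2"      # ver2=(2)
    # The first differing field decides; missing fields default to 0.
    (( ${ver1[0]:-0} < ${ver2[0]:-0} )) && echo "1.15 < 2"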
00:07:09.136    10:44:58	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:09.136    10:44:58	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:09.136  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.136  		--rc genhtml_branch_coverage=1
00:07:09.136  		--rc genhtml_function_coverage=1
00:07:09.136  		--rc genhtml_legend=1
00:07:09.136  		--rc geninfo_all_blocks=1
00:07:09.136  		--rc geninfo_unexecuted_blocks=1
00:07:09.136  		
00:07:09.136  		'
00:07:09.136    10:44:58	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:09.136  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.136  		--rc genhtml_branch_coverage=1
00:07:09.136  		--rc genhtml_function_coverage=1
00:07:09.136  		--rc genhtml_legend=1
00:07:09.136  		--rc geninfo_all_blocks=1
00:07:09.136  		--rc geninfo_unexecuted_blocks=1
00:07:09.136  		
00:07:09.136  		'
00:07:09.136    10:44:58	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:09.136  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.136  		--rc genhtml_branch_coverage=1
00:07:09.136  		--rc genhtml_function_coverage=1
00:07:09.136  		--rc genhtml_legend=1
00:07:09.136  		--rc geninfo_all_blocks=1
00:07:09.136  		--rc geninfo_unexecuted_blocks=1
00:07:09.136  		
00:07:09.136  		'
00:07:09.136    10:44:58	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:09.136  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:09.136  		--rc genhtml_branch_coverage=1
00:07:09.136  		--rc genhtml_function_coverage=1
00:07:09.136  		--rc genhtml_legend=1
00:07:09.136  		--rc geninfo_all_blocks=1
00:07:09.136  		--rc geninfo_unexecuted_blocks=1
00:07:09.136  		
00:07:09.136  		'
00:07:09.136   10:44:58	-- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:09.136   10:44:58	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:07:09.136   10:44:58	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:09.136   10:44:58	-- common/autotest_common.sh@10 -- # set +x
00:07:09.136  ************************************
00:07:09.136  START TEST thread_poller_perf
00:07:09.136  ************************************
00:07:09.136   10:44:58	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:07:09.136  [2024-12-15 10:44:58.046333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:09.136  [2024-12-15 10:44:58.046421] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089306 ]
00:07:09.136  EAL: No free 2048 kB hugepages reported on node 1
00:07:09.396  [2024-12-15 10:44:58.152567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:09.396  [2024-12-15 10:44:58.247423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:09.396  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:07:10.775  ======================================
00:07:10.775  busy:2311557856 (cyc)
00:07:10.775  total_run_count: 259000
00:07:10.775  tsc_hz: 2300000000 (cyc)
00:07:10.775  ======================================
00:07:10.775  poller_cost: 8924 (cyc), 3880 (nsec)
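[Editor's note] The poller_cost line follows directly from the two counters above; a quick arithmetic check (integer division, so the last digits may truncate):

    echo $(( 2311557856 / 259000 ))             # busy / total_run_count = 8924 cyc
    echo $(( 8924 * 1000000000 / 2300000000 ))  # cyc * 1e9 / tsc_hz     = 3880 nsec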
00:07:10.775  
00:07:10.775  real	0m1.344s
00:07:10.775  user	0m1.224s
00:07:10.775  sys	0m0.114s
00:07:10.775   10:44:59	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:10.775   10:44:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.775  ************************************
00:07:10.775  END TEST thread_poller_perf
00:07:10.775  ************************************
00:07:10.775   10:44:59	-- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:10.775   10:44:59	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:07:10.775   10:44:59	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:10.775   10:44:59	-- common/autotest_common.sh@10 -- # set +x
00:07:10.775  ************************************
00:07:10.775  START TEST thread_poller_perf
00:07:10.775  ************************************
00:07:10.775   10:44:59	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:07:10.775  [2024-12-15 10:44:59.443304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:10.775  [2024-12-15 10:44:59.443400] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089501 ]
00:07:10.775  EAL: No free 2048 kB hugepages reported on node 1
00:07:10.775  [2024-12-15 10:44:59.551643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:10.775  [2024-12-15 10:44:59.652575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:10.775  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:07:12.155  ======================================
00:07:12.155  busy:2303366332 (cyc)
00:07:12.155  total_run_count: 3478000
00:07:12.155  tsc_hz: 2300000000 (cyc)
00:07:12.155  ======================================
00:07:12.155  poller_cost: 662 (cyc), 287 (nsec)
00:07:12.155  
00:07:12.155  real	0m1.350s
00:07:12.155  user	0m1.221s
00:07:12.155  sys	0m0.122s
00:07:12.155   10:45:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:12.155   10:45:00	-- common/autotest_common.sh@10 -- # set +x
00:07:12.155  ************************************
00:07:12.155  END TEST thread_poller_perf
00:07:12.155  ************************************
00:07:12.155   10:45:00	-- thread/thread.sh@17 -- # [[ y != \y ]]
00:07:12.155  
00:07:12.155  real	0m2.992s
00:07:12.155  user	0m2.596s
00:07:12.155  sys	0m0.418s
00:07:12.155   10:45:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:12.155   10:45:00	-- common/autotest_common.sh@10 -- # set +x
00:07:12.155  ************************************
00:07:12.155  END TEST thread
00:07:12.155  ************************************
00:07:12.155   10:45:00	-- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh
00:07:12.155   10:45:00	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:07:12.155   10:45:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:12.155   10:45:00	-- common/autotest_common.sh@10 -- # set +x
00:07:12.155  ************************************
00:07:12.155  START TEST accel
00:07:12.155  ************************************
00:07:12.155   10:45:00	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel.sh
00:07:12.155  * Looking for test storage...
00:07:12.155  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel
00:07:12.155    10:45:00	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:07:12.155     10:45:00	-- common/autotest_common.sh@1690 -- # lcov --version
00:07:12.155     10:45:00	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:07:12.155    10:45:01	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:07:12.155    10:45:01	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:07:12.155    10:45:01	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:07:12.155    10:45:01	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:07:12.155    10:45:01	-- scripts/common.sh@335 -- # IFS=.-:
00:07:12.155    10:45:01	-- scripts/common.sh@335 -- # read -ra ver1
00:07:12.155    10:45:01	-- scripts/common.sh@336 -- # IFS=.-:
00:07:12.155    10:45:01	-- scripts/common.sh@336 -- # read -ra ver2
00:07:12.155    10:45:01	-- scripts/common.sh@337 -- # local 'op=<'
00:07:12.155    10:45:01	-- scripts/common.sh@339 -- # ver1_l=2
00:07:12.155    10:45:01	-- scripts/common.sh@340 -- # ver2_l=1
00:07:12.155    10:45:01	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:07:12.155    10:45:01	-- scripts/common.sh@343 -- # case "$op" in
00:07:12.155    10:45:01	-- scripts/common.sh@344 -- # : 1
00:07:12.155    10:45:01	-- scripts/common.sh@363 -- # (( v = 0 ))
00:07:12.155    10:45:01	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:12.155     10:45:01	-- scripts/common.sh@364 -- # decimal 1
00:07:12.155     10:45:01	-- scripts/common.sh@352 -- # local d=1
00:07:12.155     10:45:01	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:12.155     10:45:01	-- scripts/common.sh@354 -- # echo 1
00:07:12.155    10:45:01	-- scripts/common.sh@364 -- # ver1[v]=1
00:07:12.155     10:45:01	-- scripts/common.sh@365 -- # decimal 2
00:07:12.155     10:45:01	-- scripts/common.sh@352 -- # local d=2
00:07:12.155     10:45:01	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:12.155     10:45:01	-- scripts/common.sh@354 -- # echo 2
00:07:12.155    10:45:01	-- scripts/common.sh@365 -- # ver2[v]=2
00:07:12.155    10:45:01	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:07:12.155    10:45:01	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:07:12.155    10:45:01	-- scripts/common.sh@367 -- # return 0
00:07:12.155    10:45:01	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:12.155    10:45:01	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:07:12.155  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:12.155  		--rc genhtml_branch_coverage=1
00:07:12.155  		--rc genhtml_function_coverage=1
00:07:12.155  		--rc genhtml_legend=1
00:07:12.155  		--rc geninfo_all_blocks=1
00:07:12.155  		--rc geninfo_unexecuted_blocks=1
00:07:12.155  		
00:07:12.155  		'
00:07:12.156    10:45:01	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:07:12.156  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:12.156  		--rc genhtml_branch_coverage=1
00:07:12.156  		--rc genhtml_function_coverage=1
00:07:12.156  		--rc genhtml_legend=1
00:07:12.156  		--rc geninfo_all_blocks=1
00:07:12.156  		--rc geninfo_unexecuted_blocks=1
00:07:12.156  		
00:07:12.156  		'
00:07:12.156    10:45:01	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:12.156  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:12.156  		--rc genhtml_branch_coverage=1
00:07:12.156  		--rc genhtml_function_coverage=1
00:07:12.156  		--rc genhtml_legend=1
00:07:12.156  		--rc geninfo_all_blocks=1
00:07:12.156  		--rc geninfo_unexecuted_blocks=1
00:07:12.156  		
00:07:12.156  		'
00:07:12.156    10:45:01	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:07:12.156  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:12.156  		--rc genhtml_branch_coverage=1
00:07:12.156  		--rc genhtml_function_coverage=1
00:07:12.156  		--rc genhtml_legend=1
00:07:12.156  		--rc geninfo_all_blocks=1
00:07:12.156  		--rc geninfo_unexecuted_blocks=1
00:07:12.156  		
00:07:12.156  		'
00:07:12.156   10:45:01	-- accel/accel.sh@73 -- # declare -A expected_opcs
00:07:12.156   10:45:01	-- accel/accel.sh@74 -- # get_expected_opcs
00:07:12.156   10:45:01	-- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:12.156   10:45:01	-- accel/accel.sh@59 -- # spdk_tgt_pid=2089819
00:07:12.156   10:45:01	-- accel/accel.sh@60 -- # waitforlisten 2089819
00:07:12.156   10:45:01	-- common/autotest_common.sh@829 -- # '[' -z 2089819 ']'
00:07:12.156   10:45:01	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:12.156   10:45:01	-- common/autotest_common.sh@834 -- # local max_retries=100
00:07:12.156   10:45:01	-- accel/accel.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:07:12.156   10:45:01	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:12.156  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:12.156    10:45:01	-- accel/accel.sh@58 -- # build_accel_config
00:07:12.156   10:45:01	-- common/autotest_common.sh@838 -- # xtrace_disable
00:07:12.156    10:45:01	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:12.156   10:45:01	-- common/autotest_common.sh@10 -- # set +x
00:07:12.156    10:45:01	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:12.156    10:45:01	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:12.156    10:45:01	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:12.156    10:45:01	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:12.156    10:45:01	-- accel/accel.sh@41 -- # local IFS=,
00:07:12.156    10:45:01	-- accel/accel.sh@42 -- # jq -r .
00:07:12.156  [2024-12-15 10:45:01.107802] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:12.156  [2024-12-15 10:45:01.107869] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089819 ]
00:07:12.156  EAL: No free 2048 kB hugepages reported on node 1
00:07:12.415  [2024-12-15 10:45:01.198150] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:12.415  [2024-12-15 10:45:01.303716] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:07:12.415  [2024-12-15 10:45:01.303885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:12.674  [2024-12-15 10:45:01.505082] 'OCF_Core' volume operations registered
00:07:12.674  [2024-12-15 10:45:01.508421] 'OCF_Cache' volume operations registered
00:07:12.674  [2024-12-15 10:45:01.512207] 'OCF Composite' volume operations registered
00:07:12.674  [2024-12-15 10:45:01.515668] 'SPDK_block_device' volume operations registered
00:07:13.243   10:45:02	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:13.243   10:45:02	-- common/autotest_common.sh@862 -- # return 0
00:07:13.243   10:45:02	-- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:07:13.243    10:45:02	-- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments
00:07:13.243    10:45:02	-- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:07:13.243    10:45:02	-- common/autotest_common.sh@561 -- # xtrace_disable
00:07:13.243    10:45:02	-- common/autotest_common.sh@10 -- # set +x
00:07:13.243    10:45:02	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:07:13.243   10:45:02	-- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # IFS==
00:07:13.243   10:45:02	-- accel/accel.sh@64 -- # read -r opc module
00:07:13.243   10:45:02	-- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
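[Editor's note] The long IFS== / read loop above is populating an associative array from "opcode=module" pairs. A compact sketch of the same flow, assuming scripts/rpc.py is invoked from the SPDK tree; the jq filter is copied from the trace:

    exp_opcs=($(scripts/rpc.py accel_get_opc_assignments \
                | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'))
    declare -A expected_opcs
    for opc_opt in "${exp_opcs[@]}"; do
        IFS== read -r opc module <<< "$opc_opt"   # split e.g. "copy=software"
        expected_opcs["$opc"]=$module
    done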
00:07:13.243   10:45:02	-- accel/accel.sh@67 -- # killprocess 2089819
00:07:13.243   10:45:02	-- common/autotest_common.sh@936 -- # '[' -z 2089819 ']'
00:07:13.243   10:45:02	-- common/autotest_common.sh@940 -- # kill -0 2089819
00:07:13.243    10:45:02	-- common/autotest_common.sh@941 -- # uname
00:07:13.243   10:45:02	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:07:13.243    10:45:02	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2089819
00:07:13.243   10:45:02	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:07:13.243   10:45:02	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:07:13.243   10:45:02	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2089819'
00:07:13.243  killing process with pid 2089819
00:07:13.243   10:45:02	-- common/autotest_common.sh@955 -- # kill 2089819
00:07:13.243   10:45:02	-- common/autotest_common.sh@960 -- # wait 2089819
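[Editor's note] killprocess, as traced above, refuses to signal anything whose command name is "sudo" and otherwise kills and reaps the pid. A condensed sketch of that logic, not the full helper:

    killprocess_sketch() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone
        local name; name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1                 # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                      # reap if it is our child
    }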
00:07:13.812   10:45:02	-- accel/accel.sh@68 -- # trap - ERR
00:07:13.812   10:45:02	-- accel/accel.sh@81 -- # run_test accel_help accel_perf -h
00:07:13.812   10:45:02	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:07:13.812   10:45:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:13.812   10:45:02	-- common/autotest_common.sh@10 -- # set +x
00:07:13.812   10:45:02	-- common/autotest_common.sh@1114 -- # accel_perf -h
00:07:13.812   10:45:02	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:07:13.812    10:45:02	-- accel/accel.sh@12 -- # build_accel_config
00:07:13.812    10:45:02	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:13.812    10:45:02	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:13.812    10:45:02	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:13.812    10:45:02	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:13.812    10:45:02	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:13.812    10:45:02	-- accel/accel.sh@41 -- # local IFS=,
00:07:13.812    10:45:02	-- accel/accel.sh@42 -- # jq -r .
00:07:13.812   10:45:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:13.812   10:45:02	-- common/autotest_common.sh@10 -- # set +x
00:07:13.812   10:45:02	-- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:07:13.812   10:45:02	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:13.812   10:45:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:13.812   10:45:02	-- common/autotest_common.sh@10 -- # set +x
00:07:14.072  ************************************
00:07:14.072  START TEST accel_missing_filename
00:07:14.072  ************************************
00:07:14.072   10:45:02	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress
00:07:14.072   10:45:02	-- common/autotest_common.sh@650 -- # local es=0
00:07:14.072   10:45:02	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress
00:07:14.072   10:45:02	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:14.072   10:45:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.072    10:45:02	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:14.072   10:45:02	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.072   10:45:02	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress
00:07:14.072   10:45:02	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:07:14.072    10:45:02	-- accel/accel.sh@12 -- # build_accel_config
00:07:14.072    10:45:02	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:14.072    10:45:02	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:14.072    10:45:02	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:14.072    10:45:02	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:14.072    10:45:02	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:14.072    10:45:02	-- accel/accel.sh@41 -- # local IFS=,
00:07:14.072    10:45:02	-- accel/accel.sh@42 -- # jq -r .
00:07:14.072  [2024-12-15 10:45:02.861340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:14.072  [2024-12-15 10:45:02.861424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090163 ]
00:07:14.072  EAL: No free 2048 kB hugepages reported on node 1
00:07:14.072  [2024-12-15 10:45:02.954947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.072  [2024-12-15 10:45:03.052353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.332  [2024-12-15 10:45:03.104453] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:14.332  [2024-12-15 10:45:03.177220] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:07:14.332  A filename is required.
00:07:14.332   10:45:03	-- common/autotest_common.sh@653 -- # es=234
00:07:14.332   10:45:03	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:14.332   10:45:03	-- common/autotest_common.sh@662 -- # es=106
00:07:14.332   10:45:03	-- common/autotest_common.sh@663 -- # case "$es" in
00:07:14.332   10:45:03	-- common/autotest_common.sh@670 -- # es=1
00:07:14.332   10:45:03	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
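[Editor's note] The es chain above is the NOT() wrapper normalizing an exit status: 234 is above 128 (a shell-reported abnormal exit), so 128 is subtracted to give 106, and the case statement collapses the remaining failure to 1; "(( !es == 0 ))" then succeeds because the command was expected to fail. A sketch of the arithmetic (the real case statement is more selective):

    es=234
    (( es > 128 )) && es=$(( es - 128 ))   # 106
    case "$es" in *) es=1 ;; esac          # collapse any failure to 1
    (( !es == 0 )) && echo "failed as expected"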
00:07:14.332  
00:07:14.332  real	0m0.463s
00:07:14.332  user	0m0.330s
00:07:14.332  sys	0m0.171s
00:07:14.332   10:45:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:14.332   10:45:03	-- common/autotest_common.sh@10 -- # set +x
00:07:14.332  ************************************
00:07:14.332  END TEST accel_missing_filename
00:07:14.332  ************************************
00:07:14.332   10:45:03	-- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:14.332   10:45:03	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:07:14.332   10:45:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:14.332   10:45:03	-- common/autotest_common.sh@10 -- # set +x
00:07:14.332  ************************************
00:07:14.332  START TEST accel_compress_verify
00:07:14.332  ************************************
00:07:14.332   10:45:03	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:14.332   10:45:03	-- common/autotest_common.sh@650 -- # local es=0
00:07:14.332   10:45:03	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:14.332   10:45:03	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:14.332   10:45:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.332    10:45:03	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:14.332   10:45:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:14.332   10:45:03	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:14.332   10:45:03	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:14.591    10:45:03	-- accel/accel.sh@12 -- # build_accel_config
00:07:14.591    10:45:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:14.591    10:45:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:14.591    10:45:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:14.591    10:45:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:14.591    10:45:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:14.591    10:45:03	-- accel/accel.sh@41 -- # local IFS=,
00:07:14.591    10:45:03	-- accel/accel.sh@42 -- # jq -r .
00:07:14.591  [2024-12-15 10:45:03.375549] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:14.591  [2024-12-15 10:45:03.375630] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090278 ]
00:07:14.591  EAL: No free 2048 kB hugepages reported on node 1
00:07:14.591  [2024-12-15 10:45:03.480789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:14.591  [2024-12-15 10:45:03.578675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.851  [2024-12-15 10:45:03.630638] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:14.851  [2024-12-15 10:45:03.703106] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:07:14.851  
00:07:14.851  Compression does not support the verify option, aborting.
00:07:14.851   10:45:03	-- common/autotest_common.sh@653 -- # es=161
00:07:14.851   10:45:03	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:14.851   10:45:03	-- common/autotest_common.sh@662 -- # es=33
00:07:14.851   10:45:03	-- common/autotest_common.sh@663 -- # case "$es" in
00:07:14.851   10:45:03	-- common/autotest_common.sh@670 -- # es=1
00:07:14.851   10:45:03	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:14.851  
00:07:14.851  real	0m0.476s
00:07:14.851  user	0m0.342s
00:07:14.851  sys	0m0.173s
00:07:14.851   10:45:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:14.851   10:45:03	-- common/autotest_common.sh@10 -- # set +x
00:07:14.851  ************************************
00:07:14.851  END TEST accel_compress_verify
00:07:14.851  ************************************
00:07:14.851   10:45:03	-- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:07:14.851   10:45:03	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:14.851   10:45:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:14.851   10:45:03	-- common/autotest_common.sh@10 -- # set +x
00:07:15.110  ************************************
00:07:15.110  START TEST accel_wrong_workload
00:07:15.110  ************************************
00:07:15.110   10:45:03	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar
00:07:15.110   10:45:03	-- common/autotest_common.sh@650 -- # local es=0
00:07:15.110   10:45:03	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:07:15.110   10:45:03	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:15.110   10:45:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.110    10:45:03	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:15.110   10:45:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.110   10:45:03	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar
00:07:15.110   10:45:03	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:07:15.110    10:45:03	-- accel/accel.sh@12 -- # build_accel_config
00:07:15.110    10:45:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:15.110    10:45:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.110    10:45:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.110    10:45:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:15.110    10:45:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:15.110    10:45:03	-- accel/accel.sh@41 -- # local IFS=,
00:07:15.110    10:45:03	-- accel/accel.sh@42 -- # jq -r .
00:07:15.110  Unsupported workload type: foobar
00:07:15.110  [2024-12-15 10:45:03.898742] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:07:15.110  accel_perf options:
00:07:15.110  	[-h help message]
00:07:15.110  	[-q queue depth per core]
00:07:15.110  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:15.110  	[-T number of threads per core
00:07:15.110  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:15.110  	[-t time in seconds]
00:07:15.110  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:15.110  	[                                       dif_verify, dif_generate, dif_generate_copy]
00:07:15.110  	[-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:15.110  	[-l for compress/decompress workloads, name of uncompressed input file
00:07:15.110  	[-S for crc32c workload, use this seed value (default 0)
00:07:15.110  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:15.110  	[-f for fill workload, use this BYTE value (default 255)
00:07:15.110  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:15.110  	[-y verify result if this switch is on]
00:07:15.110  	[-a tasks to allocate per core (default: same value as -q)]
00:07:15.110  		Can be used to spread operations across a wider range of memory.
00:07:15.110   10:45:03	-- common/autotest_common.sh@653 -- # es=1
00:07:15.111   10:45:03	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:15.111   10:45:03	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:15.111   10:45:03	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:15.111  
00:07:15.111  real	0m0.038s
00:07:15.111  user	0m0.024s
00:07:15.111  sys	0m0.014s
00:07:15.111   10:45:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:15.111   10:45:03	-- common/autotest_common.sh@10 -- # set +x
00:07:15.111  ************************************
00:07:15.111  END TEST accel_wrong_workload
00:07:15.111  ************************************
00:07:15.111  Error: writing output failed: Broken pipe
00:07:15.111   10:45:03	-- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:07:15.111   10:45:03	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:07:15.111   10:45:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:15.111   10:45:03	-- common/autotest_common.sh@10 -- # set +x
00:07:15.111  ************************************
00:07:15.111  START TEST accel_negative_buffers
00:07:15.111  ************************************
00:07:15.111   10:45:03	-- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:07:15.111   10:45:03	-- common/autotest_common.sh@650 -- # local es=0
00:07:15.111   10:45:03	-- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:07:15.111   10:45:03	-- common/autotest_common.sh@638 -- # local arg=accel_perf
00:07:15.111   10:45:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.111    10:45:03	-- common/autotest_common.sh@642 -- # type -t accel_perf
00:07:15.111   10:45:03	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:15.111   10:45:03	-- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1
00:07:15.111   10:45:03	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:07:15.111    10:45:03	-- accel/accel.sh@12 -- # build_accel_config
00:07:15.111    10:45:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:15.111    10:45:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.111    10:45:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.111    10:45:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:15.111    10:45:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:15.111    10:45:03	-- accel/accel.sh@41 -- # local IFS=,
00:07:15.111    10:45:03	-- accel/accel.sh@42 -- # jq -r .
00:07:15.111  -x option must be non-negative.
00:07:15.111  [2024-12-15 10:45:03.984561] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:07:15.111  accel_perf options:
00:07:15.111  	[-h help message]
00:07:15.111  	[-q queue depth per core]
00:07:15.111  	[-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:07:15.111  	[-T number of threads per core
00:07:15.111  	[-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:07:15.111  	[-t time in seconds]
00:07:15.111  	[-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:07:15.111  	[                                       dif_verify, dif_generate, dif_generate_copy]
00:07:15.111  	[-M assign module to the operation, not compatible with accel_assign_opc RPC
00:07:15.111  	[-l for compress/decompress workloads, name of uncompressed input file
00:07:15.111  	[-S for crc32c workload, use this seed value (default 0)
00:07:15.111  	[-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:07:15.111  	[-f for fill workload, use this BYTE value (default 255)
00:07:15.111  	[-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:07:15.111  	[-y verify result if this switch is on]
00:07:15.111  	[-a tasks to allocate per core (default: same value as -q)]
00:07:15.111  		Can be used to spread operations across a wider range of memory.
00:07:15.111   10:45:03	-- common/autotest_common.sh@653 -- # es=1
00:07:15.111   10:45:03	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:15.111   10:45:03	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:15.111   10:45:03	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:15.111  
00:07:15.111  real	0m0.039s
00:07:15.111  user	0m0.019s
00:07:15.111  sys	0m0.019s
00:07:15.111   10:45:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:15.111   10:45:03	-- common/autotest_common.sh@10 -- # set +x
00:07:15.111  ************************************
00:07:15.111  END TEST accel_negative_buffers
00:07:15.111  ************************************
00:07:15.111  Error: writing output failed: Broken pipe
00:07:15.111   10:45:04	-- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:07:15.111   10:45:04	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:15.111   10:45:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:15.111   10:45:04	-- common/autotest_common.sh@10 -- # set +x
00:07:15.111  ************************************
00:07:15.111  START TEST accel_crc32c
00:07:15.111  ************************************
00:07:15.111   10:45:04	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y
00:07:15.111   10:45:04	-- accel/accel.sh@16 -- # local accel_opc
00:07:15.111   10:45:04	-- accel/accel.sh@17 -- # local accel_module
00:07:15.111    10:45:04	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:15.111    10:45:04	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:15.111     10:45:04	-- accel/accel.sh@12 -- # build_accel_config
00:07:15.111     10:45:04	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:15.111     10:45:04	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.111     10:45:04	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.111     10:45:04	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:15.111     10:45:04	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:15.111     10:45:04	-- accel/accel.sh@41 -- # local IFS=,
00:07:15.111     10:45:04	-- accel/accel.sh@42 -- # jq -r .
00:07:15.111  [2024-12-15 10:45:04.068868] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:15.111  [2024-12-15 10:45:04.068941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090355 ]
00:07:15.111  EAL: No free 2048 kB hugepages reported on node 1
00:07:15.370  [2024-12-15 10:45:04.174582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.370  [2024-12-15 10:45:04.275465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:16.748   10:45:05	-- accel/accel.sh@18 -- # out='
00:07:16.748  SPDK Configuration:
00:07:16.748  Core mask:      0x1
00:07:16.748  
00:07:16.748  Accel Perf Configuration:
00:07:16.748  Workload Type:  crc32c
00:07:16.748  CRC-32C seed:   32
00:07:16.748  Transfer size:  4096 bytes
00:07:16.748  Vector count    1
00:07:16.748  Module:         software
00:07:16.748  Queue depth:    32
00:07:16.748  Allocate depth: 32
00:07:16.748  # threads/core: 1
00:07:16.748  Run time:       1 seconds
00:07:16.748  Verify:         Yes
00:07:16.748  
00:07:16.748  Running for 1 seconds...
00:07:16.748  
00:07:16.748  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:16.748  ------------------------------------------------------------------------------------
00:07:16.748  0,0                      372288/s       1454 MiB/s                0                0
00:07:16.748  ====================================================================================
00:07:16.748  Total                    372288/s       1454 MiB/s                0                0'
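[Editor's note] The bandwidth column is transfers/s times the 4096-byte transfer size from the configuration block; a quick check against the table:

    echo $(( 372288 * 4096 / 1024 / 1024 ))   # 1454 MiB/s, matching 0,0 and Total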
00:07:16.748   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:16.748   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:16.748    10:45:05	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:07:16.748    10:45:05	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:07:16.748     10:45:05	-- accel/accel.sh@12 -- # build_accel_config
00:07:16.748     10:45:05	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:16.748     10:45:05	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:16.748     10:45:05	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:16.748     10:45:05	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:16.748     10:45:05	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:16.748     10:45:05	-- accel/accel.sh@41 -- # local IFS=,
00:07:16.748     10:45:05	-- accel/accel.sh@42 -- # jq -r .
00:07:16.749  [2024-12-15 10:45:05.546913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:16.749  [2024-12-15 10:45:05.546988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2090910 ]
00:07:16.749  EAL: No free 2048 kB hugepages reported on node 1
00:07:16.749  [2024-12-15 10:45:05.654317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.749  [2024-12-15 10:45:05.752024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.007   10:45:05	-- accel/accel.sh@21 -- # val=
00:07:17.007   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.007   10:45:05	-- accel/accel.sh@21 -- # val=
00:07:17.007   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.007   10:45:05	-- accel/accel.sh@21 -- # val=0x1
00:07:17.007   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.007   10:45:05	-- accel/accel.sh@21 -- # val=
00:07:17.007   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.007   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.007   10:45:05	-- accel/accel.sh@21 -- # val=
00:07:17.007   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=crc32c
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=32
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=software
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@23 -- # accel_module=software
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=32
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=32
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=1
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=Yes
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:17.008   10:45:05	-- accel/accel.sh@21 -- # val=
00:07:17.008   10:45:05	-- accel/accel.sh@22 -- # case "$var" in
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # IFS=:
00:07:17.008   10:45:05	-- accel/accel.sh@20 -- # read -r var val
00:07:18.387   10:45:06	-- accel/accel.sh@21 -- # val=
00:07:18.387   10:45:06	-- accel/accel.sh@22 -- # case "$var" in
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # IFS=:
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # read -r var val
00:07:18.387   10:45:06	-- accel/accel.sh@21 -- # val=
00:07:18.387   10:45:06	-- accel/accel.sh@22 -- # case "$var" in
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # IFS=:
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # read -r var val
00:07:18.387   10:45:06	-- accel/accel.sh@21 -- # val=
00:07:18.387   10:45:06	-- accel/accel.sh@22 -- # case "$var" in
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # IFS=:
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # read -r var val
00:07:18.387   10:45:06	-- accel/accel.sh@21 -- # val=
00:07:18.387   10:45:06	-- accel/accel.sh@22 -- # case "$var" in
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # IFS=:
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # read -r var val
00:07:18.387   10:45:06	-- accel/accel.sh@21 -- # val=
00:07:18.387   10:45:06	-- accel/accel.sh@22 -- # case "$var" in
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # IFS=:
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # read -r var val
00:07:18.387   10:45:06	-- accel/accel.sh@21 -- # val=
00:07:18.387   10:45:06	-- accel/accel.sh@22 -- # case "$var" in
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # IFS=:
00:07:18.387   10:45:06	-- accel/accel.sh@20 -- # read -r var val
00:07:18.387   10:45:06	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:18.387   10:45:06	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:07:18.387   10:45:06	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:18.387  
00:07:18.387  real	0m2.953s
00:07:18.387  user	0m2.612s
00:07:18.387  sys	0m0.342s
00:07:18.387   10:45:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:18.387   10:45:06	-- common/autotest_common.sh@10 -- # set +x
00:07:18.387  ************************************
00:07:18.387  END TEST accel_crc32c
00:07:18.387  ************************************
00:07:18.387   10:45:07	-- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:07:18.387   10:45:07	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:18.387   10:45:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:18.387   10:45:07	-- common/autotest_common.sh@10 -- # set +x
00:07:18.387  ************************************
00:07:18.387  START TEST accel_crc32c_C2
00:07:18.387  ************************************
00:07:18.387   10:45:07	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2
00:07:18.387   10:45:07	-- accel/accel.sh@16 -- # local accel_opc
00:07:18.387   10:45:07	-- accel/accel.sh@17 -- # local accel_module
00:07:18.387    10:45:07	-- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:18.387    10:45:07	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:18.387     10:45:07	-- accel/accel.sh@12 -- # build_accel_config
00:07:18.387     10:45:07	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:18.387     10:45:07	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:18.387     10:45:07	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:18.387     10:45:07	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:18.387     10:45:07	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:18.387     10:45:07	-- accel/accel.sh@41 -- # local IFS=,
00:07:18.388     10:45:07	-- accel/accel.sh@42 -- # jq -r .
00:07:18.388  [2024-12-15 10:45:07.069253] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:18.388  [2024-12-15 10:45:07.069331] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091270 ]
00:07:18.388  EAL: No free 2048 kB hugepages reported on node 1
00:07:18.388  [2024-12-15 10:45:07.176158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.388  [2024-12-15 10:45:07.275253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.767   10:45:08	-- accel/accel.sh@18 -- # out='
00:07:19.767  SPDK Configuration:
00:07:19.767  Core mask:      0x1
00:07:19.767  
00:07:19.767  Accel Perf Configuration:
00:07:19.767  Workload Type:  crc32c
00:07:19.767  CRC-32C seed:   0
00:07:19.767  Transfer size:  4096 bytes
00:07:19.767  Vector count    2
00:07:19.767  Module:         software
00:07:19.767  Queue depth:    32
00:07:19.767  Allocate depth: 32
00:07:19.767  # threads/core: 1
00:07:19.767  Run time:       1 seconds
00:07:19.767  Verify:         Yes
00:07:19.767  
00:07:19.767  Running for 1 seconds...
00:07:19.767  
00:07:19.767  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:19.767  ------------------------------------------------------------------------------------
00:07:19.767  0,0                      294144/s       2298 MiB/s                0                0
00:07:19.767  ====================================================================================
00:07:19.767  Total                    294144/s       2298 MiB/s                0                0'
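[Editor's note] The per-core bandwidth in the table follows directly from transfers/s x transfer size x vector count, which is also why the Total row of a single-core run must match the 0,0 row. A quick check with shell arithmetic (assuming 1 MiB = 1048576 bytes):

  # Sketch: reproduce the 2298 MiB/s figure for the crc32c -C 2 run above.
  transfers=294144; xfer=4096; vectors=2
  echo $(( transfers * xfer * vectors / 1048576 ))   # prints 2298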
00:07:19.767   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:19.767   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:19.767    10:45:08	-- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:07:19.768    10:45:08	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:07:19.768     10:45:08	-- accel/accel.sh@12 -- # build_accel_config
00:07:19.768     10:45:08	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:19.768     10:45:08	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:19.768     10:45:08	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:19.768     10:45:08	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:19.768     10:45:08	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:19.768     10:45:08	-- accel/accel.sh@41 -- # local IFS=,
00:07:19.768     10:45:08	-- accel/accel.sh@42 -- # jq -r .
00:07:19.768  [2024-12-15 10:45:08.545400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:19.768  [2024-12-15 10:45:08.545467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091449 ]
00:07:19.768  EAL: No free 2048 kB hugepages reported on node 1
00:07:19.768  [2024-12-15 10:45:08.650896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:19.768  [2024-12-15 10:45:08.749047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=0x1
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=crc32c
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@24 -- # accel_opc=crc32c
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=0
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.027   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.027   10:45:08	-- accel/accel.sh@21 -- # val=software
00:07:20.027   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.027   10:45:08	-- accel/accel.sh@23 -- # accel_module=software
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.028   10:45:08	-- accel/accel.sh@21 -- # val=32
00:07:20.028   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.028   10:45:08	-- accel/accel.sh@21 -- # val=32
00:07:20.028   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.028   10:45:08	-- accel/accel.sh@21 -- # val=1
00:07:20.028   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.028   10:45:08	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:20.028   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.028   10:45:08	-- accel/accel.sh@21 -- # val=Yes
00:07:20.028   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.028   10:45:08	-- accel/accel.sh@21 -- # val=
00:07:20.028   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:20.028   10:45:08	-- accel/accel.sh@21 -- # val=
00:07:20.028   10:45:08	-- accel/accel.sh@22 -- # case "$var" in
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # IFS=:
00:07:20.028   10:45:08	-- accel/accel.sh@20 -- # read -r var val
00:07:21.405   10:45:09	-- accel/accel.sh@21 -- # val=
00:07:21.405   10:45:09	-- accel/accel.sh@22 -- # case "$var" in
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # IFS=:
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # read -r var val
00:07:21.405   10:45:09	-- accel/accel.sh@21 -- # val=
00:07:21.405   10:45:09	-- accel/accel.sh@22 -- # case "$var" in
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # IFS=:
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # read -r var val
00:07:21.405   10:45:09	-- accel/accel.sh@21 -- # val=
00:07:21.405   10:45:09	-- accel/accel.sh@22 -- # case "$var" in
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # IFS=:
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # read -r var val
00:07:21.405   10:45:09	-- accel/accel.sh@21 -- # val=
00:07:21.405   10:45:09	-- accel/accel.sh@22 -- # case "$var" in
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # IFS=:
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # read -r var val
00:07:21.405   10:45:09	-- accel/accel.sh@21 -- # val=
00:07:21.405   10:45:09	-- accel/accel.sh@22 -- # case "$var" in
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # IFS=:
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # read -r var val
00:07:21.405   10:45:09	-- accel/accel.sh@21 -- # val=
00:07:21.405   10:45:09	-- accel/accel.sh@22 -- # case "$var" in
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # IFS=:
00:07:21.405   10:45:09	-- accel/accel.sh@20 -- # read -r var val
00:07:21.405   10:45:09	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:21.405   10:45:09	-- accel/accel.sh@28 -- # [[ -n crc32c ]]
00:07:21.405   10:45:09	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:21.405  
00:07:21.405  real	0m2.957s
00:07:21.405  user	0m2.611s
00:07:21.405  sys	0m0.350s
00:07:21.405   10:45:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:21.405   10:45:09	-- common/autotest_common.sh@10 -- # set +x
00:07:21.405  ************************************
00:07:21.405  END TEST accel_crc32c_C2
00:07:21.405  ************************************
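[Editor's note] Every accel_perf call in this log receives its accel configuration as JSON on -c /dev/fd/62: build_accel_config fills the accel_json_cfg array (empty here, since none of the traced -gt 0 branches fire) and jq -r serializes it. A sketch of the same fd-passing pattern using process substitution (the empty-object config is an assumption; the trace only shows that no overrides were set):

  # Sketch: hand accel_perf a JSON config on an anonymous fd, as in the
  # '-c /dev/fd/62' invocations traced above.
  cfg='{}'   # no module/driver overrides in these runs
  "$SPDK_DIR/build/examples/accel_perf" -c <(echo "$cfg") -t 1 -w crc32c -y -C 2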
00:07:21.405   10:45:10	-- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:07:21.405   10:45:10	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:21.405   10:45:10	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:21.405   10:45:10	-- common/autotest_common.sh@10 -- # set +x
00:07:21.405  ************************************
00:07:21.405  START TEST accel_copy
00:07:21.405  ************************************
00:07:21.405   10:45:10	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y
00:07:21.405   10:45:10	-- accel/accel.sh@16 -- # local accel_opc
00:07:21.405   10:45:10	-- accel/accel.sh@17 -- # local accel_module
00:07:21.405    10:45:10	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y
00:07:21.405    10:45:10	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:21.405     10:45:10	-- accel/accel.sh@12 -- # build_accel_config
00:07:21.405     10:45:10	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:21.405     10:45:10	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:21.405     10:45:10	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:21.405     10:45:10	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:21.405     10:45:10	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:21.405     10:45:10	-- accel/accel.sh@41 -- # local IFS=,
00:07:21.405     10:45:10	-- accel/accel.sh@42 -- # jq -r .
00:07:21.405  [2024-12-15 10:45:10.073237] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:21.405  [2024-12-15 10:45:10.073315] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091647 ]
00:07:21.405  EAL: No free 2048 kB hugepages reported on node 1
00:07:21.405  [2024-12-15 10:45:10.177283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.405  [2024-12-15 10:45:10.274917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.784   10:45:11	-- accel/accel.sh@18 -- # out='
00:07:22.784  SPDK Configuration:
00:07:22.784  Core mask:      0x1
00:07:22.784  
00:07:22.784  Accel Perf Configuration:
00:07:22.784  Workload Type:  copy
00:07:22.784  Transfer size:  4096 bytes
00:07:22.784  Vector count    1
00:07:22.784  Module:         software
00:07:22.784  Queue depth:    32
00:07:22.784  Allocate depth: 32
00:07:22.784  # threads/core: 1
00:07:22.784  Run time:       1 seconds
00:07:22.784  Verify:         Yes
00:07:22.784  
00:07:22.784  Running for 1 seconds...
00:07:22.784  
00:07:22.784  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:22.785  ------------------------------------------------------------------------------------
00:07:22.785  0,0                      276960/s       1081 MiB/s                0                0
00:07:22.785  ====================================================================================
00:07:22.785  Total                    276960/s       1081 MiB/s                0                0'
00:07:22.785   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:22.785   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:22.785    10:45:11	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:22.785     10:45:11	-- accel/accel.sh@12 -- # build_accel_config
00:07:22.785    10:45:11	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:22.785     10:45:11	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:22.785     10:45:11	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:22.785     10:45:11	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:22.785     10:45:11	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:22.785     10:45:11	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:22.785     10:45:11	-- accel/accel.sh@41 -- # local IFS=,
00:07:22.785     10:45:11	-- accel/accel.sh@42 -- # jq -r .
00:07:22.785  [2024-12-15 10:45:11.548083] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:22.785  [2024-12-15 10:45:11.548152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2091831 ]
00:07:22.785  EAL: No free 2048 kB hugepages reported on node 1
00:07:22.785  [2024-12-15 10:45:11.652712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:22.785  [2024-12-15 10:45:11.752824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.785   10:45:11	-- accel/accel.sh@21 -- # val=
00:07:22.785   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:22.785   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:22.785   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:22.785   10:45:11	-- accel/accel.sh@21 -- # val=
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=0x1
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=copy
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@24 -- # accel_opc=copy
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=software
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@23 -- # accel_module=software
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=32
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=32
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=1
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=Yes
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.044   10:45:11	-- accel/accel.sh@21 -- # val=
00:07:23.044   10:45:11	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # IFS=:
00:07:23.044   10:45:11	-- accel/accel.sh@20 -- # read -r var val
00:07:23.982   10:45:12	-- accel/accel.sh@21 -- # val=
00:07:23.982   10:45:12	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # IFS=:
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # read -r var val
00:07:23.982   10:45:12	-- accel/accel.sh@21 -- # val=
00:07:23.982   10:45:12	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # IFS=:
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # read -r var val
00:07:23.982   10:45:12	-- accel/accel.sh@21 -- # val=
00:07:23.982   10:45:12	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # IFS=:
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # read -r var val
00:07:23.982   10:45:12	-- accel/accel.sh@21 -- # val=
00:07:23.982   10:45:12	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # IFS=:
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # read -r var val
00:07:23.982   10:45:12	-- accel/accel.sh@21 -- # val=
00:07:23.982   10:45:12	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # IFS=:
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # read -r var val
00:07:23.982   10:45:12	-- accel/accel.sh@21 -- # val=
00:07:23.982   10:45:12	-- accel/accel.sh@22 -- # case "$var" in
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # IFS=:
00:07:23.982   10:45:12	-- accel/accel.sh@20 -- # read -r var val
00:07:23.982   10:45:12	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:23.982   10:45:12	-- accel/accel.sh@28 -- # [[ -n copy ]]
00:07:23.982   10:45:12	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:23.982  
00:07:23.982  real	0m2.951s
00:07:23.982  user	0m2.624s
00:07:23.982  sys	0m0.330s
00:07:23.982   10:45:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:23.982   10:45:12	-- common/autotest_common.sh@10 -- # set +x
00:07:23.982  ************************************
00:07:23.982  END TEST accel_copy
00:07:23.982  ************************************
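[Editor's note] The long val=/case blocks above are the harness parsing accel_perf's banner: accel.sh@20 splits each "Key: value" line on ':' and accel.sh@22-24 records the fields it needs. A sketch of that loop (the exact case labels are assumptions; the trace only shows the IFS=: read and the resulting accel_opc/accel_module assignments):

  # Sketch: digest the 'SPDK Configuration' banner captured in $out.
  while IFS=: read -r var val; do
      case "$var" in
          *'Workload Type'*) accel_opc=${val//[[:space:]]/} ;;     # e.g. copy
          *'Module'*)        accel_module=${val//[[:space:]]/} ;;  # e.g. software
      esac
  done <<< "$out"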
00:07:24.241   10:45:13	-- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:24.241   10:45:13	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:07:24.241   10:45:13	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:24.241   10:45:13	-- common/autotest_common.sh@10 -- # set +x
00:07:24.241  ************************************
00:07:24.241  START TEST accel_fill
00:07:24.241  ************************************
00:07:24.241   10:45:13	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:24.241   10:45:13	-- accel/accel.sh@16 -- # local accel_opc
00:07:24.241   10:45:13	-- accel/accel.sh@17 -- # local accel_module
00:07:24.241    10:45:13	-- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:24.241    10:45:13	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:24.241     10:45:13	-- accel/accel.sh@12 -- # build_accel_config
00:07:24.241     10:45:13	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:24.241     10:45:13	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:24.242     10:45:13	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:24.242     10:45:13	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:24.242     10:45:13	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:24.242     10:45:13	-- accel/accel.sh@41 -- # local IFS=,
00:07:24.242     10:45:13	-- accel/accel.sh@42 -- # jq -r .
00:07:24.242  [2024-12-15 10:45:13.070991] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:24.242  [2024-12-15 10:45:13.071058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092032 ]
00:07:24.242  EAL: No free 2048 kB hugepages reported on node 1
00:07:24.242  [2024-12-15 10:45:13.173076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:24.501  [2024-12-15 10:45:13.267961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.880   10:45:14	-- accel/accel.sh@18 -- # out='
00:07:25.880  SPDK Configuration:
00:07:25.880  Core mask:      0x1
00:07:25.880  
00:07:25.880  Accel Perf Configuration:
00:07:25.880  Workload Type:  fill
00:07:25.880  Fill pattern:   0x80
00:07:25.880  Transfer size:  4096 bytes
00:07:25.880  Vector count    1
00:07:25.880  Module:         software
00:07:25.880  Queue depth:    64
00:07:25.880  Allocate depth: 64
00:07:25.880  # threads/core: 1
00:07:25.880  Run time:       1 seconds
00:07:25.880  Verify:         Yes
00:07:25.880  
00:07:25.880  Running for 1 seconds...
00:07:25.880  
00:07:25.880  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:25.880  ------------------------------------------------------------------------------------
00:07:25.880  0,0                      430464/s       1681 MiB/s                0                0
00:07:25.880  ====================================================================================
00:07:25.880  Total                    430464/s       1681 MiB/s                0                0'
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880    10:45:14	-- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:25.880     10:45:14	-- accel/accel.sh@12 -- # build_accel_config
00:07:25.880    10:45:14	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:25.880     10:45:14	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:25.880     10:45:14	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:25.880     10:45:14	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:25.880     10:45:14	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:25.880     10:45:14	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:25.880     10:45:14	-- accel/accel.sh@41 -- # local IFS=,
00:07:25.880     10:45:14	-- accel/accel.sh@42 -- # jq -r .
00:07:25.880  [2024-12-15 10:45:14.522442] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:25.880  [2024-12-15 10:45:14.522511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092220 ]
00:07:25.880  EAL: No free 2048 kB hugepages reported on node 1
00:07:25.880  [2024-12-15 10:45:14.624748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:25.880  [2024-12-15 10:45:14.721190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=0x1
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=fill
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@24 -- # accel_opc=fill
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=0x80
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=software
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@23 -- # accel_module=software
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=64
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=64
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=1
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=Yes
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:25.880   10:45:14	-- accel/accel.sh@21 -- # val=
00:07:25.880   10:45:14	-- accel/accel.sh@22 -- # case "$var" in
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # IFS=:
00:07:25.880   10:45:14	-- accel/accel.sh@20 -- # read -r var val
00:07:27.258   10:45:15	-- accel/accel.sh@21 -- # val=
00:07:27.258   10:45:15	-- accel/accel.sh@22 -- # case "$var" in
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # IFS=:
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # read -r var val
00:07:27.258   10:45:15	-- accel/accel.sh@21 -- # val=
00:07:27.258   10:45:15	-- accel/accel.sh@22 -- # case "$var" in
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # IFS=:
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # read -r var val
00:07:27.258   10:45:15	-- accel/accel.sh@21 -- # val=
00:07:27.258   10:45:15	-- accel/accel.sh@22 -- # case "$var" in
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # IFS=:
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # read -r var val
00:07:27.258   10:45:15	-- accel/accel.sh@21 -- # val=
00:07:27.258   10:45:15	-- accel/accel.sh@22 -- # case "$var" in
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # IFS=:
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # read -r var val
00:07:27.258   10:45:15	-- accel/accel.sh@21 -- # val=
00:07:27.258   10:45:15	-- accel/accel.sh@22 -- # case "$var" in
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # IFS=:
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # read -r var val
00:07:27.258   10:45:15	-- accel/accel.sh@21 -- # val=
00:07:27.258   10:45:15	-- accel/accel.sh@22 -- # case "$var" in
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # IFS=:
00:07:27.258   10:45:15	-- accel/accel.sh@20 -- # read -r var val
00:07:27.258   10:45:15	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:27.258   10:45:15	-- accel/accel.sh@28 -- # [[ -n fill ]]
00:07:27.258   10:45:15	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:27.258  
00:07:27.258  real	0m2.920s
00:07:27.258  user	0m2.596s
00:07:27.258  sys	0m0.328s
00:07:27.258   10:45:15	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:27.258   10:45:15	-- common/autotest_common.sh@10 -- # set +x
00:07:27.258  ************************************
00:07:27.258  END TEST accel_fill
00:07:27.258  ************************************
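[Editor's note] The START TEST/END TEST banners and the real/user/sys triplets come from the run_test helper in common/autotest_common.sh, which brackets a timed command between banners. A condensed sketch of that shape (the real helper also manages xtrace state and exit codes, which this omits):

  # Sketch: banner, timed body, banner - the shape of every test in this log.
  run_test() {
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"        # e.g. accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
  }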
00:07:27.258   10:45:15	-- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:27.258   10:45:15	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:27.258   10:45:15	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:27.258   10:45:15	-- common/autotest_common.sh@10 -- # set +x
00:07:27.258  ************************************
00:07:27.258  START TEST accel_copy_crc32c
00:07:27.258  ************************************
00:07:27.258   10:45:16	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y
00:07:27.258   10:45:16	-- accel/accel.sh@16 -- # local accel_opc
00:07:27.258   10:45:16	-- accel/accel.sh@17 -- # local accel_module
00:07:27.259    10:45:16	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:27.259    10:45:16	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:27.259     10:45:16	-- accel/accel.sh@12 -- # build_accel_config
00:07:27.259     10:45:16	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:27.259     10:45:16	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:27.259     10:45:16	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:27.259     10:45:16	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:27.259     10:45:16	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:27.259     10:45:16	-- accel/accel.sh@41 -- # local IFS=,
00:07:27.259     10:45:16	-- accel/accel.sh@42 -- # jq -r .
00:07:27.259  [2024-12-15 10:45:16.035885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:27.259  [2024-12-15 10:45:16.035955] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092415 ]
00:07:27.259  EAL: No free 2048 kB hugepages reported on node 1
00:07:27.259  [2024-12-15 10:45:16.137551] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.259  [2024-12-15 10:45:16.235088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.637   10:45:17	-- accel/accel.sh@18 -- # out='
00:07:28.637  SPDK Configuration:
00:07:28.637  Core mask:      0x1
00:07:28.637  
00:07:28.637  Accel Perf Configuration:
00:07:28.637  Workload Type:  copy_crc32c
00:07:28.637  CRC-32C seed:   0
00:07:28.637  Vector size:    4096 bytes
00:07:28.637  Transfer size:  4096 bytes
00:07:28.637  Vector count    1
00:07:28.637  Module:         software
00:07:28.637  Queue depth:    32
00:07:28.637  Allocate depth: 32
00:07:28.637  # threads/core: 1
00:07:28.637  Run time:       1 seconds
00:07:28.637  Verify:         Yes
00:07:28.637  
00:07:28.637  Running for 1 seconds...
00:07:28.637  
00:07:28.637  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:28.637  ------------------------------------------------------------------------------------
00:07:28.637  0,0                      212640/s        830 MiB/s                0                0
00:07:28.637  ====================================================================================
00:07:28.637  Total                    212640/s        830 MiB/s                0                0'
00:07:28.637   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.637   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.637    10:45:17	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:28.637     10:45:17	-- accel/accel.sh@12 -- # build_accel_config
00:07:28.637    10:45:17	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:28.637     10:45:17	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:28.637     10:45:17	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:28.637     10:45:17	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:28.637     10:45:17	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:28.637     10:45:17	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:28.637     10:45:17	-- accel/accel.sh@41 -- # local IFS=,
00:07:28.637     10:45:17	-- accel/accel.sh@42 -- # jq -r .
00:07:28.637  [2024-12-15 10:45:17.510509] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:28.637  [2024-12-15 10:45:17.510578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092601 ]
00:07:28.637  EAL: No free 2048 kB hugepages reported on node 1
00:07:28.637  [2024-12-15 10:45:17.618439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:28.897  [2024-12-15 10:45:17.716468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=0x1
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=copy_crc32c
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=0
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=software
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@23 -- # accel_module=software
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=32
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=32
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=1
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=Yes
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:28.897   10:45:17	-- accel/accel.sh@21 -- # val=
00:07:28.897   10:45:17	-- accel/accel.sh@22 -- # case "$var" in
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # IFS=:
00:07:28.897   10:45:17	-- accel/accel.sh@20 -- # read -r var val
00:07:30.276   10:45:18	-- accel/accel.sh@21 -- # val=
00:07:30.276   10:45:18	-- accel/accel.sh@22 -- # case "$var" in
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # IFS=:
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # read -r var val
00:07:30.276   10:45:18	-- accel/accel.sh@21 -- # val=
00:07:30.276   10:45:18	-- accel/accel.sh@22 -- # case "$var" in
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # IFS=:
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # read -r var val
00:07:30.276   10:45:18	-- accel/accel.sh@21 -- # val=
00:07:30.276   10:45:18	-- accel/accel.sh@22 -- # case "$var" in
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # IFS=:
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # read -r var val
00:07:30.276   10:45:18	-- accel/accel.sh@21 -- # val=
00:07:30.276   10:45:18	-- accel/accel.sh@22 -- # case "$var" in
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # IFS=:
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # read -r var val
00:07:30.276   10:45:18	-- accel/accel.sh@21 -- # val=
00:07:30.276   10:45:18	-- accel/accel.sh@22 -- # case "$var" in
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # IFS=:
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # read -r var val
00:07:30.276   10:45:18	-- accel/accel.sh@21 -- # val=
00:07:30.276   10:45:18	-- accel/accel.sh@22 -- # case "$var" in
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # IFS=:
00:07:30.276   10:45:18	-- accel/accel.sh@20 -- # read -r var val
00:07:30.276   10:45:18	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:30.276   10:45:18	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:07:30.276   10:45:18	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:30.276  
00:07:30.276  real	0m2.959s
00:07:30.276  user	0m2.622s
00:07:30.276  sys	0m0.341s
00:07:30.276   10:45:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:30.276   10:45:18	-- common/autotest_common.sh@10 -- # set +x
00:07:30.276  ************************************
00:07:30.276  END TEST accel_copy_crc32c
00:07:30.276  ************************************
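[Editor's note] Each app start above is preceded by "EAL: No free 2048 kB hugepages reported on node 1", i.e. DPDK sees an empty 2048 kB pool on that NUMA node; the allocation is evidently satisfied elsewhere, since the runs proceed. To inspect the pools directly (standard sysfs/procfs paths, not taken from this log):

  # Sketch: show per-node 2048 kB hugepage counts and the global summary.
  grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  grep Huge /proc/meminfo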
00:07:30.276   10:45:19	-- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:30.276   10:45:19	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:30.276   10:45:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:30.276   10:45:19	-- common/autotest_common.sh@10 -- # set +x
00:07:30.276  ************************************
00:07:30.276  START TEST accel_copy_crc32c_C2
00:07:30.276  ************************************
00:07:30.276   10:45:19	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:30.276   10:45:19	-- accel/accel.sh@16 -- # local accel_opc
00:07:30.276   10:45:19	-- accel/accel.sh@17 -- # local accel_module
00:07:30.276    10:45:19	-- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:30.276    10:45:19	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:30.276     10:45:19	-- accel/accel.sh@12 -- # build_accel_config
00:07:30.276     10:45:19	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:30.276     10:45:19	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:30.276     10:45:19	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:30.276     10:45:19	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:30.276     10:45:19	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:30.276     10:45:19	-- accel/accel.sh@41 -- # local IFS=,
00:07:30.276     10:45:19	-- accel/accel.sh@42 -- # jq -r .
00:07:30.276  [2024-12-15 10:45:19.042323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:30.276  [2024-12-15 10:45:19.042390] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2092850 ]
00:07:30.276  EAL: No free 2048 kB hugepages reported on node 1
00:07:30.276  [2024-12-15 10:45:19.144826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.276  [2024-12-15 10:45:19.242126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.654   10:45:20	-- accel/accel.sh@18 -- # out='
00:07:31.654  SPDK Configuration:
00:07:31.654  Core mask:      0x1
00:07:31.654  
00:07:31.654  Accel Perf Configuration:
00:07:31.654  Workload Type:  copy_crc32c
00:07:31.654  CRC-32C seed:   0
00:07:31.654  Vector size:    4096 bytes
00:07:31.654  Transfer size:  8192 bytes
00:07:31.654  Vector count    2
00:07:31.654  Module:         software
00:07:31.654  Queue depth:    32
00:07:31.654  Allocate depth: 32
00:07:31.654  # threads/core: 1
00:07:31.654  Run time:       1 seconds
00:07:31.654  Verify:         Yes
00:07:31.654  
00:07:31.654  Running for 1 seconds...
00:07:31.654  
00:07:31.654  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:31.654  ------------------------------------------------------------------------------------
00:07:31.654  0,0                      153760/s       1201 MiB/s                0                0
00:07:31.654  ====================================================================================
00:07:31.654  Total                    153760/s       1201 MiB/s                0                0'
00:07:31.654   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.654   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.654    10:45:20	-- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:07:31.654     10:45:20	-- accel/accel.sh@12 -- # build_accel_config
00:07:31.654    10:45:20	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:31.654     10:45:20	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:31.654     10:45:20	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:31.654     10:45:20	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:31.654     10:45:20	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:31.654     10:45:20	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:31.654     10:45:20	-- accel/accel.sh@41 -- # local IFS=,
00:07:31.654     10:45:20	-- accel/accel.sh@42 -- # jq -r .
00:07:31.654  [2024-12-15 10:45:20.502458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:31.654  [2024-12-15 10:45:20.502511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093086 ]
00:07:31.654  EAL: No free 2048 kB hugepages reported on node 1
00:07:31.654  [2024-12-15 10:45:20.586532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.913  [2024-12-15 10:45:20.684227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=0x1
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=copy_crc32c
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@24 -- # accel_opc=copy_crc32c
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=0
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val='8192 bytes'
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=software
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@23 -- # accel_module=software
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=32
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=32
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=1
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=Yes
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:31.913   10:45:20	-- accel/accel.sh@21 -- # val=
00:07:31.913   10:45:20	-- accel/accel.sh@22 -- # case "$var" in
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # IFS=:
00:07:31.913   10:45:20	-- accel/accel.sh@20 -- # read -r var val
00:07:33.293   10:45:21	-- accel/accel.sh@21 -- # val=
00:07:33.293   10:45:21	-- accel/accel.sh@22 -- # case "$var" in
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # IFS=:
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # read -r var val
00:07:33.293   10:45:21	-- accel/accel.sh@21 -- # val=
00:07:33.293   10:45:21	-- accel/accel.sh@22 -- # case "$var" in
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # IFS=:
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # read -r var val
00:07:33.293   10:45:21	-- accel/accel.sh@21 -- # val=
00:07:33.293   10:45:21	-- accel/accel.sh@22 -- # case "$var" in
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # IFS=:
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # read -r var val
00:07:33.293   10:45:21	-- accel/accel.sh@21 -- # val=
00:07:33.293   10:45:21	-- accel/accel.sh@22 -- # case "$var" in
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # IFS=:
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # read -r var val
00:07:33.293   10:45:21	-- accel/accel.sh@21 -- # val=
00:07:33.293   10:45:21	-- accel/accel.sh@22 -- # case "$var" in
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # IFS=:
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # read -r var val
00:07:33.293   10:45:21	-- accel/accel.sh@21 -- # val=
00:07:33.293   10:45:21	-- accel/accel.sh@22 -- # case "$var" in
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # IFS=:
00:07:33.293   10:45:21	-- accel/accel.sh@20 -- # read -r var val
00:07:33.293   10:45:21	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:33.293   10:45:21	-- accel/accel.sh@28 -- # [[ -n copy_crc32c ]]
00:07:33.293   10:45:21	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:33.293  
00:07:33.293  real	0m2.917s
00:07:33.293  user	0m2.626s
00:07:33.293  sys	0m0.299s
00:07:33.293   10:45:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:33.293   10:45:21	-- common/autotest_common.sh@10 -- # set +x
00:07:33.293  ************************************
00:07:33.293  END TEST accel_copy_crc32c_C2
00:07:33.293  ************************************
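[Editor's note] After parsing, each test closes with the three accel.sh@28 assertions seen above: the parsed module and opcode must be non-empty, and the module must be the expected backend. Expressed over the parsed variables (names per the trace; $expected_module is a stand-in for the test's hard-coded value):

  # Sketch: the closing checks from accel.sh@28.
  [[ -n $accel_module ]]
  [[ -n $accel_opc ]]
  [[ $accel_module == "$expected_module" ]]   # 'software' throughout this log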
00:07:33.293   10:45:21	-- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:33.293   10:45:21	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:33.293   10:45:21	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:33.293   10:45:21	-- common/autotest_common.sh@10 -- # set +x
00:07:33.293  ************************************
00:07:33.293  START TEST accel_dualcast
00:07:33.293  ************************************
00:07:33.293   10:45:21	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y
00:07:33.293   10:45:21	-- accel/accel.sh@16 -- # local accel_opc
00:07:33.293   10:45:21	-- accel/accel.sh@17 -- # local accel_module
00:07:33.293    10:45:21	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y
00:07:33.293    10:45:21	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:33.293     10:45:21	-- accel/accel.sh@12 -- # build_accel_config
00:07:33.293     10:45:21	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:33.293     10:45:21	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:33.293     10:45:21	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:33.293     10:45:21	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:33.293     10:45:21	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:33.293     10:45:21	-- accel/accel.sh@41 -- # local IFS=,
00:07:33.293     10:45:21	-- accel/accel.sh@42 -- # jq -r .
00:07:33.293  [2024-12-15 10:45:22.007363] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:33.293  [2024-12-15 10:45:22.007431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093342 ]
00:07:33.293  EAL: No free 2048 kB hugepages reported on node 1
00:07:33.293  [2024-12-15 10:45:22.100465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.293  [2024-12-15 10:45:22.202876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.672   10:45:23	-- accel/accel.sh@18 -- # out='
00:07:34.672  SPDK Configuration:
00:07:34.672  Core mask:      0x1
00:07:34.672  
00:07:34.672  Accel Perf Configuration:
00:07:34.672  Workload Type:  dualcast
00:07:34.672  Transfer size:  4096 bytes
00:07:34.672  Vector count    1
00:07:34.672  Module:         software
00:07:34.672  Queue depth:    32
00:07:34.672  Allocate depth: 32
00:07:34.672  # threads/core: 1
00:07:34.672  Run time:       1 seconds
00:07:34.672  Verify:         Yes
00:07:34.672  
00:07:34.672  Running for 1 seconds...
00:07:34.672  
00:07:34.672  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:34.672  ------------------------------------------------------------------------------------
00:07:34.672  0,0                      326208/s       1274 MiB/s                0                0
00:07:34.672  ====================================================================================
00:07:34.672  Total                    326208/s       1274 MiB/s                0                0'
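
A quick sanity check on this table: bandwidth is transfers/s times the 4096-byte transfer size, 326208 x 4096 = 1,336,147,968 B/s, which divided by 2^20 gives the printed 1274 MiB/s. The same arithmetic reproduces every Total row in this section. It also shows each dualcast is credited once even though it writes two 4096-byte destinations, so the figure appears to reflect source-side traffic rather than total bytes written.
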
00:07:34.672   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.672   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.672    10:45:23	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:34.672     10:45:23	-- accel/accel.sh@12 -- # build_accel_config
00:07:34.672    10:45:23	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:34.672     10:45:23	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:34.672     10:45:23	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:34.672     10:45:23	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:34.672     10:45:23	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:34.672     10:45:23	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:34.672     10:45:23	-- accel/accel.sh@41 -- # local IFS=,
00:07:34.672     10:45:23	-- accel/accel.sh@42 -- # jq -r .
00:07:34.672  [2024-12-15 10:45:23.472441] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:34.672  [2024-12-15 10:45:23.472509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093528 ]
00:07:34.672  EAL: No free 2048 kB hugepages reported on node 1
00:07:34.672  [2024-12-15 10:45:23.577093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.672  [2024-12-15 10:45:23.674918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.931   10:45:23	-- accel/accel.sh@21 -- # val=
00:07:34.931   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.931   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.931   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.931   10:45:23	-- accel/accel.sh@21 -- # val=
00:07:34.931   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.931   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=0x1
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=dualcast
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@24 -- # accel_opc=dualcast
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=software
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@23 -- # accel_module=software
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=32
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=32
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=1
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=Yes
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:34.932   10:45:23	-- accel/accel.sh@21 -- # val=
00:07:34.932   10:45:23	-- accel/accel.sh@22 -- # case "$var" in
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # IFS=:
00:07:34.932   10:45:23	-- accel/accel.sh@20 -- # read -r var val
00:07:36.311   10:45:24	-- accel/accel.sh@21 -- # val=
00:07:36.311   10:45:24	-- accel/accel.sh@22 -- # case "$var" in
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # IFS=:
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # read -r var val
00:07:36.311   10:45:24	-- accel/accel.sh@21 -- # val=
00:07:36.311   10:45:24	-- accel/accel.sh@22 -- # case "$var" in
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # IFS=:
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # read -r var val
00:07:36.311   10:45:24	-- accel/accel.sh@21 -- # val=
00:07:36.311   10:45:24	-- accel/accel.sh@22 -- # case "$var" in
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # IFS=:
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # read -r var val
00:07:36.311   10:45:24	-- accel/accel.sh@21 -- # val=
00:07:36.311   10:45:24	-- accel/accel.sh@22 -- # case "$var" in
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # IFS=:
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # read -r var val
00:07:36.311   10:45:24	-- accel/accel.sh@21 -- # val=
00:07:36.311   10:45:24	-- accel/accel.sh@22 -- # case "$var" in
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # IFS=:
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # read -r var val
00:07:36.311   10:45:24	-- accel/accel.sh@21 -- # val=
00:07:36.311   10:45:24	-- accel/accel.sh@22 -- # case "$var" in
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # IFS=:
00:07:36.311   10:45:24	-- accel/accel.sh@20 -- # read -r var val
00:07:36.311   10:45:24	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:36.311   10:45:24	-- accel/accel.sh@28 -- # [[ -n dualcast ]]
00:07:36.311   10:45:24	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:36.311  
00:07:36.311  real	0m2.946s
00:07:36.311  user	0m2.622s
00:07:36.311  sys	0m0.328s
00:07:36.311   10:45:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:36.311   10:45:24	-- common/autotest_common.sh@10 -- # set +x
00:07:36.311  ************************************
00:07:36.311  END TEST accel_dualcast
00:07:36.311  ************************************
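
The dualcast workload closed above broadcasts one source buffer into two destination buffers as a single operation; a software module inevitably performs two copies, while an offload engine can emit both writes from one descriptor. A hedged sketch of the semantics (names invented for the sketch, not SPDK's API):

    /* Illustrative only: dualcast = one source broadcast to two destinations. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);    /* software path: simply two copies...   */
        memcpy(dst2, src, len);    /* ...hardware can fuse them into one op */
    }

    int main(void)
    {
        static uint8_t src[4096], d1[4096], d2[4096];  /* the test's 4096 B size */
        memset(src, 0x5A, sizeof(src));
        dualcast(d1, d2, src, sizeof(src));
        printf("verify=%d\n",
               memcmp(d1, src, sizeof(d1)) == 0 && memcmp(d2, src, sizeof(d2)) == 0);
        return 0;
    }
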
00:07:36.311   10:45:24	-- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:07:36.311   10:45:24	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:36.311   10:45:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:36.311   10:45:24	-- common/autotest_common.sh@10 -- # set +x
00:07:36.311  ************************************
00:07:36.311  START TEST accel_compare
00:07:36.311  ************************************
00:07:36.311   10:45:24	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y
00:07:36.311   10:45:24	-- accel/accel.sh@16 -- # local accel_opc
00:07:36.311   10:45:24	-- accel/accel.sh@17 -- # local accel_module
00:07:36.311    10:45:24	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y
00:07:36.311    10:45:24	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:36.311     10:45:24	-- accel/accel.sh@12 -- # build_accel_config
00:07:36.311     10:45:24	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:36.311     10:45:24	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:36.311     10:45:24	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:36.311     10:45:24	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:36.311     10:45:24	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:36.311     10:45:24	-- accel/accel.sh@41 -- # local IFS=,
00:07:36.311     10:45:24	-- accel/accel.sh@42 -- # jq -r .
00:07:36.311  [2024-12-15 10:45:24.993690] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:36.311  [2024-12-15 10:45:24.993756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093722 ]
00:07:36.311  EAL: No free 2048 kB hugepages reported on node 1
00:07:36.311  [2024-12-15 10:45:25.100395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.311  [2024-12-15 10:45:25.197276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.691   10:45:26	-- accel/accel.sh@18 -- # out='
00:07:37.691  SPDK Configuration:
00:07:37.691  Core mask:      0x1
00:07:37.691  
00:07:37.691  Accel Perf Configuration:
00:07:37.691  Workload Type:  compare
00:07:37.691  Transfer size:  4096 bytes
00:07:37.691  Vector count    1
00:07:37.691  Module:         software
00:07:37.691  Queue depth:    32
00:07:37.691  Allocate depth: 32
00:07:37.691  # threads/core: 1
00:07:37.691  Run time:       1 seconds
00:07:37.691  Verify:         Yes
00:07:37.691  
00:07:37.691  Running for 1 seconds...
00:07:37.691  
00:07:37.691  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:37.691  ------------------------------------------------------------------------------------
00:07:37.691  0,0                      396992/s       1550 MiB/s                0                0
00:07:37.691  ====================================================================================
00:07:37.691  Total                    396992/s       1550 MiB/s                0                0'
00:07:37.691   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.691   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.691    10:45:26	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:07:37.691     10:45:26	-- accel/accel.sh@12 -- # build_accel_config
00:07:37.691    10:45:26	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:07:37.691     10:45:26	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:37.691     10:45:26	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:37.691     10:45:26	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:37.691     10:45:26	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:37.691     10:45:26	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:37.691     10:45:26	-- accel/accel.sh@41 -- # local IFS=,
00:07:37.691     10:45:26	-- accel/accel.sh@42 -- # jq -r .
00:07:37.691  [2024-12-15 10:45:26.470634] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:37.691  [2024-12-15 10:45:26.470709] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093905 ]
00:07:37.691  EAL: No free 2048 kB hugepages reported on node 1
00:07:37.691  [2024-12-15 10:45:26.575387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:37.692  [2024-12-15 10:45:26.671910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=0x1
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=compare
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@24 -- # accel_opc=compare
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=software
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@23 -- # accel_module=software
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=32
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=32
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=1
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=Yes
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:37.951   10:45:26	-- accel/accel.sh@21 -- # val=
00:07:37.951   10:45:26	-- accel/accel.sh@22 -- # case "$var" in
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # IFS=:
00:07:37.951   10:45:26	-- accel/accel.sh@20 -- # read -r var val
00:07:39.330   10:45:27	-- accel/accel.sh@21 -- # val=
00:07:39.330   10:45:27	-- accel/accel.sh@22 -- # case "$var" in
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # IFS=:
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # read -r var val
00:07:39.330   10:45:27	-- accel/accel.sh@21 -- # val=
00:07:39.330   10:45:27	-- accel/accel.sh@22 -- # case "$var" in
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # IFS=:
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # read -r var val
00:07:39.330   10:45:27	-- accel/accel.sh@21 -- # val=
00:07:39.330   10:45:27	-- accel/accel.sh@22 -- # case "$var" in
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # IFS=:
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # read -r var val
00:07:39.330   10:45:27	-- accel/accel.sh@21 -- # val=
00:07:39.330   10:45:27	-- accel/accel.sh@22 -- # case "$var" in
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # IFS=:
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # read -r var val
00:07:39.330   10:45:27	-- accel/accel.sh@21 -- # val=
00:07:39.330   10:45:27	-- accel/accel.sh@22 -- # case "$var" in
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # IFS=:
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # read -r var val
00:07:39.330   10:45:27	-- accel/accel.sh@21 -- # val=
00:07:39.330   10:45:27	-- accel/accel.sh@22 -- # case "$var" in
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # IFS=:
00:07:39.330   10:45:27	-- accel/accel.sh@20 -- # read -r var val
00:07:39.330   10:45:27	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:39.330   10:45:27	-- accel/accel.sh@28 -- # [[ -n compare ]]
00:07:39.330   10:45:27	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:39.330  
00:07:39.330  real	0m2.955s
00:07:39.330  user	0m2.614s
00:07:39.330  sys	0m0.345s
00:07:39.330   10:45:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:39.330   10:45:27	-- common/autotest_common.sh@10 -- # set +x
00:07:39.330  ************************************
00:07:39.330  END TEST accel_compare
00:07:39.330  ************************************
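
accel_compare checks two buffers for byte equality; the Miscompares column in the table above counts operations where that check failed (zero here, since the test feeds identical buffers). A sketch of the semantics, with names invented for the illustration rather than taken from SPDK:

    /* Illustrative only: the compare op succeeds iff both buffers match. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    static int accel_compare(const void *a, const void *b, size_t len)
    {
        return memcmp(a, b, len) == 0 ? 0 : -1;   /* 0 = match */
    }

    int main(void)
    {
        static uint8_t x[4096], y[4096];
        memset(x, 0x11, sizeof(x));
        memcpy(y, x, sizeof(y));
        printf("match=%d\n", accel_compare(x, y, sizeof(x)) == 0);
        y[100] ^= 0xFF;                           /* flip one byte -> a miscompare */
        printf("match_after_flip=%d\n", accel_compare(x, y, sizeof(x)) == 0);
        return 0;
    }
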
00:07:39.330   10:45:27	-- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:39.330   10:45:27	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:07:39.330   10:45:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:39.330   10:45:27	-- common/autotest_common.sh@10 -- # set +x
00:07:39.330  ************************************
00:07:39.330  START TEST accel_xor
00:07:39.330  ************************************
00:07:39.330   10:45:27	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y
00:07:39.330   10:45:27	-- accel/accel.sh@16 -- # local accel_opc
00:07:39.330   10:45:27	-- accel/accel.sh@17 -- # local accel_module
00:07:39.330    10:45:27	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y
00:07:39.330    10:45:27	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:39.330     10:45:27	-- accel/accel.sh@12 -- # build_accel_config
00:07:39.330     10:45:27	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:39.331     10:45:27	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:39.331     10:45:27	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:39.331     10:45:27	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:39.331     10:45:27	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:39.331     10:45:27	-- accel/accel.sh@41 -- # local IFS=,
00:07:39.331     10:45:27	-- accel/accel.sh@42 -- # jq -r .
00:07:39.331  [2024-12-15 10:45:27.975613] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:39.331  [2024-12-15 10:45:27.975671] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094104 ]
00:07:39.331  EAL: No free 2048 kB hugepages reported on node 1
00:07:39.331  [2024-12-15 10:45:28.066660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.331  [2024-12-15 10:45:28.163961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.708   10:45:29	-- accel/accel.sh@18 -- # out='
00:07:40.708  SPDK Configuration:
00:07:40.708  Core mask:      0x1
00:07:40.708  
00:07:40.708  Accel Perf Configuration:
00:07:40.708  Workload Type:  xor
00:07:40.708  Source buffers: 2
00:07:40.708  Transfer size:  4096 bytes
00:07:40.708  Vector count    1
00:07:40.708  Module:         software
00:07:40.708  Queue depth:    32
00:07:40.708  Allocate depth: 32
00:07:40.708  # threads/core: 1
00:07:40.708  Run time:       1 seconds
00:07:40.708  Verify:         Yes
00:07:40.708  
00:07:40.708  Running for 1 seconds...
00:07:40.708  
00:07:40.708  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:40.708  ------------------------------------------------------------------------------------
00:07:40.708  0,0                      323968/s       1265 MiB/s                0                0
00:07:40.708  ====================================================================================
00:07:40.708  Total                    323968/s       1265 MiB/s                0                0'
00:07:40.708    10:45:29	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:40.708   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.708   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.708    10:45:29	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:40.708     10:45:29	-- accel/accel.sh@12 -- # build_accel_config
00:07:40.708     10:45:29	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:40.709     10:45:29	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:40.709     10:45:29	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:40.709     10:45:29	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:40.709     10:45:29	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:40.709     10:45:29	-- accel/accel.sh@41 -- # local IFS=,
00:07:40.709     10:45:29	-- accel/accel.sh@42 -- # jq -r .
00:07:40.709  [2024-12-15 10:45:29.424372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:40.709  [2024-12-15 10:45:29.424440] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094287 ]
00:07:40.709  EAL: No free 2048 kB hugepages reported on node 1
00:07:40.709  [2024-12-15 10:45:29.516571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.709  [2024-12-15 10:45:29.613663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=0x1
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=xor
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@24 -- # accel_opc=xor
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=2
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=software
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@23 -- # accel_module=software
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=32
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=32
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=1
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=Yes
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:40.709   10:45:29	-- accel/accel.sh@21 -- # val=
00:07:40.709   10:45:29	-- accel/accel.sh@22 -- # case "$var" in
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # IFS=:
00:07:40.709   10:45:29	-- accel/accel.sh@20 -- # read -r var val
00:07:42.087   10:45:30	-- accel/accel.sh@21 -- # val=
00:07:42.087   10:45:30	-- accel/accel.sh@22 -- # case "$var" in
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # IFS=:
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # read -r var val
00:07:42.087   10:45:30	-- accel/accel.sh@21 -- # val=
00:07:42.087   10:45:30	-- accel/accel.sh@22 -- # case "$var" in
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # IFS=:
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # read -r var val
00:07:42.087   10:45:30	-- accel/accel.sh@21 -- # val=
00:07:42.087   10:45:30	-- accel/accel.sh@22 -- # case "$var" in
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # IFS=:
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # read -r var val
00:07:42.087   10:45:30	-- accel/accel.sh@21 -- # val=
00:07:42.087   10:45:30	-- accel/accel.sh@22 -- # case "$var" in
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # IFS=:
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # read -r var val
00:07:42.087   10:45:30	-- accel/accel.sh@21 -- # val=
00:07:42.087   10:45:30	-- accel/accel.sh@22 -- # case "$var" in
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # IFS=:
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # read -r var val
00:07:42.087   10:45:30	-- accel/accel.sh@21 -- # val=
00:07:42.087   10:45:30	-- accel/accel.sh@22 -- # case "$var" in
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # IFS=:
00:07:42.087   10:45:30	-- accel/accel.sh@20 -- # read -r var val
00:07:42.087   10:45:30	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:42.087   10:45:30	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:07:42.087   10:45:30	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:42.087  
00:07:42.087  real	0m2.901s
00:07:42.087  user	0m2.606s
00:07:42.087  sys	0m0.300s
00:07:42.087   10:45:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:42.087   10:45:30	-- common/autotest_common.sh@10 -- # set +x
00:07:42.087  ************************************
00:07:42.087  END TEST accel_xor
00:07:42.087  ************************************
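
This first xor run reports "Source buffers: 2": each 4096-byte output is the byte-wise XOR of two inputs, the primitive behind RAID-5-style parity. A sketch of the software semantics (illustrative; xor_bufs is a made-up name, and a real module would vectorize the inner loop):

    /* Illustrative only: dst[i] = src0[i] ^ src1[i] ^ ... across nsrc buffers. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    static void xor_bufs(uint8_t *dst, uint8_t **srcs, int nsrc, size_t len)
    {
        memcpy(dst, srcs[0], len);
        for (int s = 1; s < nsrc; s++)
            for (size_t i = 0; i < len; i++)
                dst[i] ^= srcs[s][i];
    }

    int main(void)
    {
        static uint8_t a[4096], b[4096], out[4096];
        memset(a, 0xF0, sizeof(a));
        memset(b, 0x0F, sizeof(b));
        uint8_t *srcs[] = { a, b };
        xor_bufs(out, srcs, 2, sizeof(out));
        printf("out[0]=0x%02x (expect 0xff)\n", out[0]);
        return 0;
    }
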
00:07:42.087   10:45:30	-- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:42.087   10:45:30	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:42.087   10:45:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:42.087   10:45:30	-- common/autotest_common.sh@10 -- # set +x
00:07:42.087  ************************************
00:07:42.087  START TEST accel_xor
00:07:42.087  ************************************
00:07:42.087   10:45:30	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3
00:07:42.087   10:45:30	-- accel/accel.sh@16 -- # local accel_opc
00:07:42.087   10:45:30	-- accel/accel.sh@17 -- # local accel_module
00:07:42.087    10:45:30	-- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3
00:07:42.087    10:45:30	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:42.087     10:45:30	-- accel/accel.sh@12 -- # build_accel_config
00:07:42.087     10:45:30	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:42.087     10:45:30	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:42.087     10:45:30	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:42.087     10:45:30	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:42.087     10:45:30	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:42.087     10:45:30	-- accel/accel.sh@41 -- # local IFS=,
00:07:42.087     10:45:30	-- accel/accel.sh@42 -- # jq -r .
00:07:42.087  [2024-12-15 10:45:30.927417] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:42.087  [2024-12-15 10:45:30.927486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094486 ]
00:07:42.087  EAL: No free 2048 kB hugepages reported on node 1
00:07:42.087  [2024-12-15 10:45:31.033061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:42.346  [2024-12-15 10:45:31.128557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.724   10:45:32	-- accel/accel.sh@18 -- # out='
00:07:43.724  SPDK Configuration:
00:07:43.724  Core mask:      0x1
00:07:43.724  
00:07:43.724  Accel Perf Configuration:
00:07:43.724  Workload Type:  xor
00:07:43.724  Source buffers: 3
00:07:43.724  Transfer size:  4096 bytes
00:07:43.724  Vector count    1
00:07:43.724  Module:         software
00:07:43.724  Queue depth:    32
00:07:43.724  Allocate depth: 32
00:07:43.724  # threads/core: 1
00:07:43.724  Run time:       1 seconds
00:07:43.724  Verify:         Yes
00:07:43.724  
00:07:43.724  Running for 1 seconds...
00:07:43.724  
00:07:43.724  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:43.724  ------------------------------------------------------------------------------------
00:07:43.724  0,0                      305600/s       1193 MiB/s                0                0
00:07:43.724  ====================================================================================
00:07:43.724  Total                    305600/s       1193 MiB/s                0                0'
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724    10:45:32	-- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:43.724     10:45:32	-- accel/accel.sh@12 -- # build_accel_config
00:07:43.724    10:45:32	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:43.724     10:45:32	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:43.724     10:45:32	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:43.724     10:45:32	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:43.724     10:45:32	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:43.724     10:45:32	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:43.724     10:45:32	-- accel/accel.sh@41 -- # local IFS=,
00:07:43.724     10:45:32	-- accel/accel.sh@42 -- # jq -r .
00:07:43.724  [2024-12-15 10:45:32.390896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:43.724  [2024-12-15 10:45:32.390972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094675 ]
00:07:43.724  EAL: No free 2048 kB hugepages reported on node 1
00:07:43.724  [2024-12-15 10:45:32.494980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.724  [2024-12-15 10:45:32.588661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=0x1
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=xor
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@24 -- # accel_opc=xor
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=3
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=software
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@23 -- # accel_module=software
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=32
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=32
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.724   10:45:32	-- accel/accel.sh@21 -- # val=1
00:07:43.724   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.724   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.725   10:45:32	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:43.725   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.725   10:45:32	-- accel/accel.sh@21 -- # val=Yes
00:07:43.725   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.725   10:45:32	-- accel/accel.sh@21 -- # val=
00:07:43.725   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:43.725   10:45:32	-- accel/accel.sh@21 -- # val=
00:07:43.725   10:45:32	-- accel/accel.sh@22 -- # case "$var" in
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # IFS=:
00:07:43.725   10:45:32	-- accel/accel.sh@20 -- # read -r var val
00:07:45.103   10:45:33	-- accel/accel.sh@21 -- # val=
00:07:45.103   10:45:33	-- accel/accel.sh@22 -- # case "$var" in
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # IFS=:
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # read -r var val
00:07:45.103   10:45:33	-- accel/accel.sh@21 -- # val=
00:07:45.103   10:45:33	-- accel/accel.sh@22 -- # case "$var" in
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # IFS=:
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # read -r var val
00:07:45.103   10:45:33	-- accel/accel.sh@21 -- # val=
00:07:45.103   10:45:33	-- accel/accel.sh@22 -- # case "$var" in
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # IFS=:
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # read -r var val
00:07:45.103   10:45:33	-- accel/accel.sh@21 -- # val=
00:07:45.103   10:45:33	-- accel/accel.sh@22 -- # case "$var" in
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # IFS=:
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # read -r var val
00:07:45.103   10:45:33	-- accel/accel.sh@21 -- # val=
00:07:45.103   10:45:33	-- accel/accel.sh@22 -- # case "$var" in
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # IFS=:
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # read -r var val
00:07:45.103   10:45:33	-- accel/accel.sh@21 -- # val=
00:07:45.103   10:45:33	-- accel/accel.sh@22 -- # case "$var" in
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # IFS=:
00:07:45.103   10:45:33	-- accel/accel.sh@20 -- # read -r var val
00:07:45.103   10:45:33	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:45.103   10:45:33	-- accel/accel.sh@28 -- # [[ -n xor ]]
00:07:45.103   10:45:33	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:45.103  
00:07:45.103  real	0m2.911s
00:07:45.103  user	0m2.611s
00:07:45.103  sys	0m0.304s
00:07:45.103   10:45:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:45.103   10:45:33	-- common/autotest_common.sh@10 -- # set +x
00:07:45.103  ************************************
00:07:45.103  END TEST accel_xor
00:07:45.103  ************************************
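
The second xor variant passes -x 3, raising "Source buffers" to 3, so every output byte absorbs one extra read-and-XOR. Throughput drops accordingly, from 323968/s to 305600/s, about 6%. In terms of the xor_bufs sketch above, only the call site changes:

        uint8_t *srcs[] = { a, b, c };   /* c: a third 4096 B buffer, declared like a and b */
        xor_bufs(out, srcs, 3, sizeof(out));
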
00:07:45.103   10:45:33	-- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:45.103   10:45:33	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:07:45.103   10:45:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:45.103   10:45:33	-- common/autotest_common.sh@10 -- # set +x
00:07:45.103  ************************************
00:07:45.103  START TEST accel_dif_verify
00:07:45.103  ************************************
00:07:45.103   10:45:33	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify
00:07:45.103   10:45:33	-- accel/accel.sh@16 -- # local accel_opc
00:07:45.103   10:45:33	-- accel/accel.sh@17 -- # local accel_module
00:07:45.103    10:45:33	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify
00:07:45.103    10:45:33	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:45.103     10:45:33	-- accel/accel.sh@12 -- # build_accel_config
00:07:45.103     10:45:33	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:45.103     10:45:33	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:45.103     10:45:33	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:45.103     10:45:33	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:45.103     10:45:33	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:45.103     10:45:33	-- accel/accel.sh@41 -- # local IFS=,
00:07:45.103     10:45:33	-- accel/accel.sh@42 -- # jq -r .
00:07:45.103  [2024-12-15 10:45:33.897003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:45.103  [2024-12-15 10:45:33.897075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094961 ]
00:07:45.103  EAL: No free 2048 kB hugepages reported on node 1
00:07:45.103  [2024-12-15 10:45:34.000131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:45.103  [2024-12-15 10:45:34.096047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.482   10:45:35	-- accel/accel.sh@18 -- # out='
00:07:46.482  SPDK Configuration:
00:07:46.482  Core mask:      0x1
00:07:46.482  
00:07:46.482  Accel Perf Configuration:
00:07:46.482  Workload Type:  dif_verify
00:07:46.482  Vector size:    4096 bytes
00:07:46.482  Transfer size:  4096 bytes
00:07:46.482  Block size:     512 bytes
00:07:46.482  Metadata size:  8 bytes
00:07:46.482  Vector count    1
00:07:46.482  Module:         software
00:07:46.482  Queue depth:    32
00:07:46.482  Allocate depth: 32
00:07:46.482  # threads/core: 1
00:07:46.482  Run time:       1 seconds
00:07:46.482  Verify:         No
00:07:46.482  
00:07:46.482  Running for 1 seconds...
00:07:46.482  
00:07:46.482  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:46.482  ------------------------------------------------------------------------------------
00:07:46.482  0,0                       84768/s        336 MiB/s                0                0
00:07:46.482  ====================================================================================
00:07:46.482  Total                     84768/s        331 MiB/s                0                0'
00:07:46.482    10:45:35	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:46.482   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.482   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.482    10:45:35	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:46.482     10:45:35	-- accel/accel.sh@12 -- # build_accel_config
00:07:46.482     10:45:35	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:46.482     10:45:35	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:46.482     10:45:35	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:46.482     10:45:35	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:46.482     10:45:35	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:46.482     10:45:35	-- accel/accel.sh@41 -- # local IFS=,
00:07:46.482     10:45:35	-- accel/accel.sh@42 -- # jq -r .
00:07:46.482  [2024-12-15 10:45:35.350892] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:46.482  [2024-12-15 10:45:35.350960] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095214 ]
00:07:46.482  EAL: No free 2048 kB hugepages reported on node 1
00:07:46.482  [2024-12-15 10:45:35.443106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.741  [2024-12-15 10:45:35.540185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=0x1
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=dif_verify
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@24 -- # accel_opc=dif_verify
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val='512 bytes'
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val='8 bytes'
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=software
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@23 -- # accel_module=software
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=32
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=32
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=1
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=No
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:46.741   10:45:35	-- accel/accel.sh@21 -- # val=
00:07:46.741   10:45:35	-- accel/accel.sh@22 -- # case "$var" in
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # IFS=:
00:07:46.741   10:45:35	-- accel/accel.sh@20 -- # read -r var val
00:07:48.121   10:45:36	-- accel/accel.sh@21 -- # val=
00:07:48.121   10:45:36	-- accel/accel.sh@22 -- # case "$var" in
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # IFS=:
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # read -r var val
00:07:48.121   10:45:36	-- accel/accel.sh@21 -- # val=
00:07:48.121   10:45:36	-- accel/accel.sh@22 -- # case "$var" in
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # IFS=:
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # read -r var val
00:07:48.121   10:45:36	-- accel/accel.sh@21 -- # val=
00:07:48.121   10:45:36	-- accel/accel.sh@22 -- # case "$var" in
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # IFS=:
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # read -r var val
00:07:48.121   10:45:36	-- accel/accel.sh@21 -- # val=
00:07:48.121   10:45:36	-- accel/accel.sh@22 -- # case "$var" in
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # IFS=:
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # read -r var val
00:07:48.121   10:45:36	-- accel/accel.sh@21 -- # val=
00:07:48.121   10:45:36	-- accel/accel.sh@22 -- # case "$var" in
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # IFS=:
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # read -r var val
00:07:48.121   10:45:36	-- accel/accel.sh@21 -- # val=
00:07:48.121   10:45:36	-- accel/accel.sh@22 -- # case "$var" in
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # IFS=:
00:07:48.121   10:45:36	-- accel/accel.sh@20 -- # read -r var val
00:07:48.121   10:45:36	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:48.121   10:45:36	-- accel/accel.sh@28 -- # [[ -n dif_verify ]]
00:07:48.121   10:45:36	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:48.121  
00:07:48.121  real	0m2.922s
00:07:48.121  user	0m2.606s
00:07:48.121  sys	0m0.323s
00:07:48.121   10:45:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:48.121   10:45:36	-- common/autotest_common.sh@10 -- # set +x
00:07:48.121  ************************************
00:07:48.121  END TEST accel_dif_verify
00:07:48.121  ************************************
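
dif_verify is the first integrity workload in the batch. Per its configuration block, each 4096-byte transfer is treated as eight 512-byte blocks, each carrying an 8-byte T10 DIF tuple: a 16-bit CRC guard, a 16-bit application tag, and a 32-bit reference tag. The op recomputes the guard per block and flags mismatches, and that per-block CRC16 is why throughput falls to ~331 MiB/s against the 1193-1550 MiB/s of the pure memory ops above. (Note the invocation omits the perf tool's own -y flag, hence "Verify: No"; the DIF check is the workload itself.) Below is a self-contained sketch of both sides of the scheme; the names and the split data/tuple layout are simplifications for the illustration, since real DIF tuples are frequently interleaved with the data, and none of this is SPDK's implementation:

    /* Illustrative only -- T10 DIF semantics at this test's geometry:
     * 4096 B transfer = 8 blocks of 512 B, each guarded by an 8-byte tuple. */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    struct t10_dif {
        uint16_t guard;      /* CRC16 of the block's data */
        uint16_t app_tag;    /* application-defined */
        uint32_t ref_tag;    /* commonly derived from the block's LBA */
    };

    /* CRC16 T10-DIF: polynomial 0x8BB7, init 0, no reflection. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    static void dif_generate(const uint8_t *data, struct t10_dif *dif,
                             size_t nblocks, size_t blk, uint32_t ref_tag0)
    {
        for (size_t n = 0; n < nblocks; n++) {
            dif[n].guard   = crc16_t10dif(0, data + n * blk, blk);
            dif[n].app_tag = 0;                       /* app policy; 0 here */
            dif[n].ref_tag = ref_tag0 + (uint32_t)n;
        }
    }

    static int dif_verify(const uint8_t *data, const struct t10_dif *dif,
                          size_t nblocks, size_t blk)
    {
        for (size_t n = 0; n < nblocks; n++)
            if (crc16_t10dif(0, data + n * blk, blk) != dif[n].guard)
                return -1;                            /* guard mismatch */
        return 0;
    }

    int main(void)
    {
        enum { BLK = 512, N = 8 };                    /* 8 x 512 B = the 4096 B transfer */
        static uint8_t data[BLK * N];
        struct t10_dif dif[N];
        memset(data, 0x3C, sizeof(data));
        dif_generate(data, dif, N, BLK, 0);
        printf("verify=%d\n", dif_verify(data, dif, N, BLK) == 0);
        data[7] ^= 1;                                 /* corrupt block 0 -> guard fails */
        printf("verify_after_corruption=%d\n", dif_verify(data, dif, N, BLK) == 0);
        return 0;
    }
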
00:07:48.121   10:45:36	-- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:48.121   10:45:36	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:07:48.121   10:45:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:48.121   10:45:36	-- common/autotest_common.sh@10 -- # set +x
00:07:48.121  ************************************
00:07:48.121  START TEST accel_dif_generate
00:07:48.121  ************************************
00:07:48.121   10:45:36	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate
00:07:48.121   10:45:36	-- accel/accel.sh@16 -- # local accel_opc
00:07:48.121   10:45:36	-- accel/accel.sh@17 -- # local accel_module
00:07:48.121    10:45:36	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate
00:07:48.121    10:45:36	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:48.121     10:45:36	-- accel/accel.sh@12 -- # build_accel_config
00:07:48.121     10:45:36	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:48.121     10:45:36	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:48.121     10:45:36	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:48.121     10:45:36	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:48.121     10:45:36	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:48.121     10:45:36	-- accel/accel.sh@41 -- # local IFS=,
00:07:48.121     10:45:36	-- accel/accel.sh@42 -- # jq -r .
00:07:48.121  [2024-12-15 10:45:36.865240] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:48.121  [2024-12-15 10:45:36.865307] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095413 ]
00:07:48.121  EAL: No free 2048 kB hugepages reported on node 1
00:07:48.121  [2024-12-15 10:45:36.969965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.121  [2024-12-15 10:45:37.067100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.501   10:45:38	-- accel/accel.sh@18 -- # out='
00:07:49.501  SPDK Configuration:
00:07:49.501  Core mask:      0x1
00:07:49.501  
00:07:49.501  Accel Perf Configuration:
00:07:49.501  Workload Type:  dif_generate
00:07:49.501  Vector size:    4096 bytes
00:07:49.501  Transfer size:  4096 bytes
00:07:49.501  Block size:     512 bytes
00:07:49.501  Metadata size:  8 bytes
00:07:49.501  Vector count    1
00:07:49.501  Module:         software
00:07:49.501  Queue depth:    32
00:07:49.501  Allocate depth: 32
00:07:49.501  # threads/core: 1
00:07:49.501  Run time:       1 seconds
00:07:49.501  Verify:         No
00:07:49.501  
00:07:49.501  Running for 1 seconds...
00:07:49.501  
00:07:49.501  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:49.501  ------------------------------------------------------------------------------------
00:07:49.501  0,0                      102048/s        404 MiB/s                0                0
00:07:49.501  ====================================================================================
00:07:49.501  Total                    102048/s        398 MiB/s                0                0'
00:07:49.501   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.501   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.501    10:45:38	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:49.501     10:45:38	-- accel/accel.sh@12 -- # build_accel_config
00:07:49.501    10:45:38	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:49.501     10:45:38	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:49.501     10:45:38	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:49.501     10:45:38	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:49.501     10:45:38	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:49.501     10:45:38	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:49.501     10:45:38	-- accel/accel.sh@41 -- # local IFS=,
00:07:49.501     10:45:38	-- accel/accel.sh@42 -- # jq -r .
00:07:49.501  [2024-12-15 10:45:38.338779] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:49.501  [2024-12-15 10:45:38.338848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095596 ]
00:07:49.501  EAL: No free 2048 kB hugepages reported on node 1
00:07:49.501  [2024-12-15 10:45:38.443635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:49.761  [2024-12-15 10:45:38.540359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=0x1
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=dif_generate
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@24 -- # accel_opc=dif_generate
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val='512 bytes'
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val='8 bytes'
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=software
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@23 -- # accel_module=software
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=32
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=32
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=1
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=No
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:49.761   10:45:38	-- accel/accel.sh@21 -- # val=
00:07:49.761   10:45:38	-- accel/accel.sh@22 -- # case "$var" in
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # IFS=:
00:07:49.761   10:45:38	-- accel/accel.sh@20 -- # read -r var val
00:07:50.802   10:45:39	-- accel/accel.sh@21 -- # val=
00:07:50.802   10:45:39	-- accel/accel.sh@22 -- # case "$var" in
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # IFS=:
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # read -r var val
00:07:50.802   10:45:39	-- accel/accel.sh@21 -- # val=
00:07:50.802   10:45:39	-- accel/accel.sh@22 -- # case "$var" in
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # IFS=:
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # read -r var val
00:07:50.802   10:45:39	-- accel/accel.sh@21 -- # val=
00:07:50.802   10:45:39	-- accel/accel.sh@22 -- # case "$var" in
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # IFS=:
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # read -r var val
00:07:50.802   10:45:39	-- accel/accel.sh@21 -- # val=
00:07:50.802   10:45:39	-- accel/accel.sh@22 -- # case "$var" in
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # IFS=:
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # read -r var val
00:07:50.802   10:45:39	-- accel/accel.sh@21 -- # val=
00:07:50.802   10:45:39	-- accel/accel.sh@22 -- # case "$var" in
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # IFS=:
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # read -r var val
00:07:50.802   10:45:39	-- accel/accel.sh@21 -- # val=
00:07:50.802   10:45:39	-- accel/accel.sh@22 -- # case "$var" in
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # IFS=:
00:07:50.802   10:45:39	-- accel/accel.sh@20 -- # read -r var val
00:07:50.802   10:45:39	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:50.802   10:45:39	-- accel/accel.sh@28 -- # [[ -n dif_generate ]]
00:07:50.802   10:45:39	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:50.802  
00:07:50.802  real	0m2.953s
00:07:50.802  user	0m2.610s
00:07:50.802  sys	0m0.349s
00:07:50.802   10:45:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:50.802   10:45:39	-- common/autotest_common.sh@10 -- # set +x
00:07:50.802  ************************************
00:07:50.802  END TEST accel_dif_generate
00:07:50.802  ************************************
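Per the configuration block above, dif_generate fills each 4096-byte vector with protection metadata for its 512-byte blocks (8 blocks per vector, 8 bytes of metadata each), consistent with T10 DIF style protection information. The reported Total bandwidth also follows directly from the transfer rate; a quick cross-check with the numbers copied from the results table:

    # 102048 transfers/s at 4096 bytes each, expressed in MiB/s:
    echo 'scale=1; 102048 * 4096 / (1024 * 1024)' | bc   # 398.6, matching "Total ... 398 MiB/s"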
00:07:51.169   10:45:39	-- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:51.169   10:45:39	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:07:51.169   10:45:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:51.169   10:45:39	-- common/autotest_common.sh@10 -- # set +x
00:07:51.169  ************************************
00:07:51.169  START TEST accel_dif_generate_copy
00:07:51.169  ************************************
00:07:51.169   10:45:39	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy
00:07:51.169   10:45:39	-- accel/accel.sh@16 -- # local accel_opc
00:07:51.169   10:45:39	-- accel/accel.sh@17 -- # local accel_module
00:07:51.169    10:45:39	-- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy
00:07:51.169    10:45:39	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:51.169     10:45:39	-- accel/accel.sh@12 -- # build_accel_config
00:07:51.169     10:45:39	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:51.169     10:45:39	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:51.169     10:45:39	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:51.169     10:45:39	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:51.169     10:45:39	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:51.169     10:45:39	-- accel/accel.sh@41 -- # local IFS=,
00:07:51.169     10:45:39	-- accel/accel.sh@42 -- # jq -r .
00:07:51.169  [2024-12-15 10:45:39.863423] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:51.169  [2024-12-15 10:45:39.863489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095791 ]
00:07:51.169  EAL: No free 2048 kB hugepages reported on node 1
00:07:51.169  [2024-12-15 10:45:39.969120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:51.169  [2024-12-15 10:45:40.077146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.547   10:45:41	-- accel/accel.sh@18 -- # out='
00:07:52.547  SPDK Configuration:
00:07:52.547  Core mask:      0x1
00:07:52.547  
00:07:52.547  Accel Perf Configuration:
00:07:52.547  Workload Type:  dif_generate_copy
00:07:52.547  Vector size:    4096 bytes
00:07:52.547  Transfer size:  4096 bytes
00:07:52.547  Vector count    1
00:07:52.547  Module:         software
00:07:52.547  Queue depth:    32
00:07:52.547  Allocate depth: 32
00:07:52.547  # threads/core: 1
00:07:52.547  Run time:       1 seconds
00:07:52.547  Verify:         No
00:07:52.547  
00:07:52.547  Running for 1 seconds...
00:07:52.547  
00:07:52.547  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:52.547  ------------------------------------------------------------------------------------
00:07:52.547  0,0                       78976/s        313 MiB/s                0                0
00:07:52.547  ====================================================================================
00:07:52.547  Total                     78976/s        308 MiB/s                0                0'
00:07:52.547    10:45:41	-- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:52.547   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.547   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.547    10:45:41	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:52.547     10:45:41	-- accel/accel.sh@12 -- # build_accel_config
00:07:52.547     10:45:41	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:52.547     10:45:41	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:52.547     10:45:41	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:52.547     10:45:41	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:52.547     10:45:41	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:52.547     10:45:41	-- accel/accel.sh@41 -- # local IFS=,
00:07:52.547     10:45:41	-- accel/accel.sh@42 -- # jq -r .
00:07:52.547  [2024-12-15 10:45:41.336843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:52.547  [2024-12-15 10:45:41.336910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095985 ]
00:07:52.547  EAL: No free 2048 kB hugepages reported on node 1
00:07:52.547  [2024-12-15 10:45:41.429574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:52.547  [2024-12-15 10:45:41.524642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=0x1
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=dif_generate_copy
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@24 -- # accel_opc=dif_generate_copy
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=software
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@23 -- # accel_module=software
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=32
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=32
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=1
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=No
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:52.807   10:45:41	-- accel/accel.sh@21 -- # val=
00:07:52.807   10:45:41	-- accel/accel.sh@22 -- # case "$var" in
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # IFS=:
00:07:52.807   10:45:41	-- accel/accel.sh@20 -- # read -r var val
00:07:53.745   10:45:42	-- accel/accel.sh@21 -- # val=
00:07:53.745   10:45:42	-- accel/accel.sh@22 -- # case "$var" in
00:07:53.745   10:45:42	-- accel/accel.sh@20 -- # IFS=:
00:07:53.745   10:45:42	-- accel/accel.sh@20 -- # read -r var val
00:07:53.745   10:45:42	-- accel/accel.sh@21 -- # val=
00:07:53.745   10:45:42	-- accel/accel.sh@22 -- # case "$var" in
00:07:53.745   10:45:42	-- accel/accel.sh@20 -- # IFS=:
00:07:53.745   10:45:42	-- accel/accel.sh@20 -- # read -r var val
00:07:53.745   10:45:42	-- accel/accel.sh@21 -- # val=
00:07:53.745   10:45:42	-- accel/accel.sh@22 -- # case "$var" in
00:07:53.745   10:45:42	-- accel/accel.sh@20 -- # IFS=:
00:07:54.005   10:45:42	-- accel/accel.sh@20 -- # read -r var val
00:07:54.005   10:45:42	-- accel/accel.sh@21 -- # val=
00:07:54.005   10:45:42	-- accel/accel.sh@22 -- # case "$var" in
00:07:54.005   10:45:42	-- accel/accel.sh@20 -- # IFS=:
00:07:54.005   10:45:42	-- accel/accel.sh@20 -- # read -r var val
00:07:54.005   10:45:42	-- accel/accel.sh@21 -- # val=
00:07:54.005   10:45:42	-- accel/accel.sh@22 -- # case "$var" in
00:07:54.005   10:45:42	-- accel/accel.sh@20 -- # IFS=:
00:07:54.005   10:45:42	-- accel/accel.sh@20 -- # read -r var val
00:07:54.005   10:45:42	-- accel/accel.sh@21 -- # val=
00:07:54.005   10:45:42	-- accel/accel.sh@22 -- # case "$var" in
00:07:54.005   10:45:42	-- accel/accel.sh@20 -- # IFS=:
00:07:54.005   10:45:42	-- accel/accel.sh@20 -- # read -r var val
00:07:54.005   10:45:42	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:54.005   10:45:42	-- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]]
00:07:54.005   10:45:42	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:54.005  
00:07:54.005  real	0m2.930s
00:07:54.005  user	0m2.621s
00:07:54.005  sys	0m0.313s
00:07:54.005   10:45:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:54.005   10:45:42	-- common/autotest_common.sh@10 -- # set +x
00:07:54.005  ************************************
00:07:54.005  END TEST accel_dif_generate_copy
00:07:54.005  ************************************
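dif_generate_copy performs the same DIF generation but writes the result into a separate output buffer, and the per-core rate drops from about 404 MiB/s to 313 MiB/s; for a software, memory-bound path that direction of change is plausible, though the log alone does not attribute the difference. A side-by-side sketch under the same path assumptions as earlier:

    # Run the generate and generate+copy variants back to back (sketch).
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    for w in dif_generate dif_generate_copy; do
        "$SPDK/build/examples/accel_perf" -t 1 -w "$w"
    done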
00:07:54.005   10:45:42	-- accel/accel.sh@107 -- # [[ y == y ]]
00:07:54.005   10:45:42	-- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:54.005   10:45:42	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:07:54.005   10:45:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:54.005   10:45:42	-- common/autotest_common.sh@10 -- # set +x
00:07:54.005  ************************************
00:07:54.005  START TEST accel_comp
00:07:54.005  ************************************
00:07:54.005   10:45:42	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:54.005   10:45:42	-- accel/accel.sh@16 -- # local accel_opc
00:07:54.005   10:45:42	-- accel/accel.sh@17 -- # local accel_module
00:07:54.005    10:45:42	-- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:54.005    10:45:42	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:54.005     10:45:42	-- accel/accel.sh@12 -- # build_accel_config
00:07:54.005     10:45:42	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:54.005     10:45:42	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:54.005     10:45:42	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:54.005     10:45:42	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:54.005     10:45:42	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:54.005     10:45:42	-- accel/accel.sh@41 -- # local IFS=,
00:07:54.005     10:45:42	-- accel/accel.sh@42 -- # jq -r .
00:07:54.005  [2024-12-15 10:45:42.849021] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:54.005  [2024-12-15 10:45:42.849152] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096182 ]
00:07:54.005  EAL: No free 2048 kB hugepages reported on node 1
00:07:54.005  [2024-12-15 10:45:43.009078] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:54.265  [2024-12-15 10:45:43.108377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.644   10:45:44	-- accel/accel.sh@18 -- # out='Preparing input file...
00:07:55.644  
00:07:55.644  SPDK Configuration:
00:07:55.644  Core mask:      0x1
00:07:55.644  
00:07:55.644  Accel Perf Configuration:
00:07:55.644  Workload Type:  compress
00:07:55.644  Transfer size:  4096 bytes
00:07:55.644  Vector count    1
00:07:55.644  Module:         software
00:07:55.644  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:55.644  Queue depth:    32
00:07:55.644  Allocate depth: 32
00:07:55.644  # threads/core: 1
00:07:55.644  Run time:       1 seconds
00:07:55.644  Verify:         No
00:07:55.644  
00:07:55.644  Running for 1 seconds...
00:07:55.644  
00:07:55.644  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:55.644  ------------------------------------------------------------------------------------
00:07:55.644  0,0                       42496/s        177 MiB/s                0                0
00:07:55.644  ====================================================================================
00:07:55.644  Total                     42496/s        166 MiB/s                0                0'
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644    10:45:44	-- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:55.644     10:45:44	-- accel/accel.sh@12 -- # build_accel_config
00:07:55.644    10:45:44	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:55.644     10:45:44	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:55.644     10:45:44	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:55.644     10:45:44	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:55.644     10:45:44	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:55.644     10:45:44	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:55.644     10:45:44	-- accel/accel.sh@41 -- # local IFS=,
00:07:55.644     10:45:44	-- accel/accel.sh@42 -- # jq -r .
00:07:55.644  [2024-12-15 10:45:44.369053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:55.644  [2024-12-15 10:45:44.369139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096369 ]
00:07:55.644  EAL: No free 2048 kB hugepages reported on node 1
00:07:55.644  [2024-12-15 10:45:44.471821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:55.644  [2024-12-15 10:45:44.569693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=0x1
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=compress
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@24 -- # accel_opc=compress
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=software
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@23 -- # accel_module=software
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=32
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=32
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=1
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=No
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:55.644   10:45:44	-- accel/accel.sh@21 -- # val=
00:07:55.644   10:45:44	-- accel/accel.sh@22 -- # case "$var" in
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # IFS=:
00:07:55.644   10:45:44	-- accel/accel.sh@20 -- # read -r var val
00:07:57.025   10:45:45	-- accel/accel.sh@21 -- # val=
00:07:57.025   10:45:45	-- accel/accel.sh@22 -- # case "$var" in
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # IFS=:
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # read -r var val
00:07:57.025   10:45:45	-- accel/accel.sh@21 -- # val=
00:07:57.025   10:45:45	-- accel/accel.sh@22 -- # case "$var" in
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # IFS=:
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # read -r var val
00:07:57.025   10:45:45	-- accel/accel.sh@21 -- # val=
00:07:57.025   10:45:45	-- accel/accel.sh@22 -- # case "$var" in
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # IFS=:
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # read -r var val
00:07:57.025   10:45:45	-- accel/accel.sh@21 -- # val=
00:07:57.025   10:45:45	-- accel/accel.sh@22 -- # case "$var" in
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # IFS=:
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # read -r var val
00:07:57.025   10:45:45	-- accel/accel.sh@21 -- # val=
00:07:57.025   10:45:45	-- accel/accel.sh@22 -- # case "$var" in
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # IFS=:
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # read -r var val
00:07:57.025   10:45:45	-- accel/accel.sh@21 -- # val=
00:07:57.025   10:45:45	-- accel/accel.sh@22 -- # case "$var" in
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # IFS=:
00:07:57.025   10:45:45	-- accel/accel.sh@20 -- # read -r var val
00:07:57.025   10:45:45	-- accel/accel.sh@28 -- # [[ -n software ]]
00:07:57.025   10:45:45	-- accel/accel.sh@28 -- # [[ -n compress ]]
00:07:57.025   10:45:45	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:57.025  
00:07:57.025  real	0m3.005s
00:07:57.025  user	0m2.626s
00:07:57.025  sys	0m0.384s
00:07:57.025   10:45:45	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:07:57.025   10:45:45	-- common/autotest_common.sh@10 -- # set +x
00:07:57.025  ************************************
00:07:57.025  END TEST accel_comp
00:07:57.025  ************************************
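The compress case is the first in this run to take a -l argument, pointing accel_perf at an input corpus (test/accel/bib in the SPDK tree), hence the extra "Preparing input file..." line and the File Name row in its configuration output. A sketch under the same path assumptions:

    # Compress the bundled test corpus for one second (sketch).
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$SPDK/test/accel/bib"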
00:07:57.025   10:45:45	-- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:57.025   10:45:45	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:07:57.025   10:45:45	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:07:57.025   10:45:45	-- common/autotest_common.sh@10 -- # set +x
00:07:57.025  ************************************
00:07:57.025  START TEST accel_decomp
00:07:57.025  ************************************
00:07:57.025   10:45:45	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:57.025   10:45:45	-- accel/accel.sh@16 -- # local accel_opc
00:07:57.025   10:45:45	-- accel/accel.sh@17 -- # local accel_module
00:07:57.025    10:45:45	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:57.025    10:45:45	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:57.025     10:45:45	-- accel/accel.sh@12 -- # build_accel_config
00:07:57.025     10:45:45	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:57.025     10:45:45	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:57.025     10:45:45	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:57.025     10:45:45	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:57.025     10:45:45	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:57.025     10:45:45	-- accel/accel.sh@41 -- # local IFS=,
00:07:57.025     10:45:45	-- accel/accel.sh@42 -- # jq -r .
00:07:57.025  [2024-12-15 10:45:45.894420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:57.025  [2024-12-15 10:45:45.894489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096581 ]
00:07:57.025  EAL: No free 2048 kB hugepages reported on node 1
00:07:57.025  [2024-12-15 10:45:45.999369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:57.285  [2024-12-15 10:45:46.101236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.665   10:45:47	-- accel/accel.sh@18 -- # out='Preparing input file...
00:07:58.665  
00:07:58.665  SPDK Configuration:
00:07:58.665  Core mask:      0x1
00:07:58.665  
00:07:58.665  Accel Perf Configuration:
00:07:58.665  Workload Type:  decompress
00:07:58.665  Transfer size:  4096 bytes
00:07:58.665  Vector count    1
00:07:58.665  Module:         software
00:07:58.665  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:58.665  Queue depth:    32
00:07:58.665  Allocate depth: 32
00:07:58.665  # threads/core: 1
00:07:58.665  Run time:       1 seconds
00:07:58.665  Verify:         Yes
00:07:58.665  
00:07:58.665  Running for 1 seconds...
00:07:58.665  
00:07:58.665  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:07:58.665  ------------------------------------------------------------------------------------
00:07:58.665  0,0                       56992/s        105 MiB/s                0                0
00:07:58.665  ====================================================================================
00:07:58.665  Total                     56992/s        222 MiB/s                0                0'
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665    10:45:47	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:58.665     10:45:47	-- accel/accel.sh@12 -- # build_accel_config
00:07:58.665    10:45:47	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y
00:07:58.665     10:45:47	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:58.665     10:45:47	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:58.665     10:45:47	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:58.665     10:45:47	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:58.665     10:45:47	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:58.665     10:45:47	-- accel/accel.sh@41 -- # local IFS=,
00:07:58.665     10:45:47	-- accel/accel.sh@42 -- # jq -r .
00:07:58.665  [2024-12-15 10:45:47.377671] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:07:58.665  [2024-12-15 10:45:47.377741] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2096838 ]
00:07:58.665  EAL: No free 2048 kB hugepages reported on node 1
00:07:58.665  [2024-12-15 10:45:47.482185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:58.665  [2024-12-15 10:45:47.579009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=0x1
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=decompress
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@24 -- # accel_opc=decompress
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val='4096 bytes'
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=software
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@23 -- # accel_module=software
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=32
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.665   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.665   10:45:47	-- accel/accel.sh@21 -- # val=32
00:07:58.665   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.666   10:45:47	-- accel/accel.sh@21 -- # val=1
00:07:58.666   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.666   10:45:47	-- accel/accel.sh@21 -- # val='1 seconds'
00:07:58.666   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.666   10:45:47	-- accel/accel.sh@21 -- # val=Yes
00:07:58.666   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.666   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.666   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:07:58.666   10:45:47	-- accel/accel.sh@21 -- # val=
00:07:58.666   10:45:47	-- accel/accel.sh@22 -- # case "$var" in
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # IFS=:
00:07:58.666   10:45:47	-- accel/accel.sh@20 -- # read -r var val
00:08:00.045   10:45:48	-- accel/accel.sh@21 -- # val=
00:08:00.045   10:45:48	-- accel/accel.sh@22 -- # case "$var" in
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # IFS=:
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # read -r var val
00:08:00.045   10:45:48	-- accel/accel.sh@21 -- # val=
00:08:00.045   10:45:48	-- accel/accel.sh@22 -- # case "$var" in
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # IFS=:
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # read -r var val
00:08:00.045   10:45:48	-- accel/accel.sh@21 -- # val=
00:08:00.045   10:45:48	-- accel/accel.sh@22 -- # case "$var" in
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # IFS=:
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # read -r var val
00:08:00.045   10:45:48	-- accel/accel.sh@21 -- # val=
00:08:00.045   10:45:48	-- accel/accel.sh@22 -- # case "$var" in
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # IFS=:
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # read -r var val
00:08:00.045   10:45:48	-- accel/accel.sh@21 -- # val=
00:08:00.045   10:45:48	-- accel/accel.sh@22 -- # case "$var" in
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # IFS=:
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # read -r var val
00:08:00.045   10:45:48	-- accel/accel.sh@21 -- # val=
00:08:00.045   10:45:48	-- accel/accel.sh@22 -- # case "$var" in
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # IFS=:
00:08:00.045   10:45:48	-- accel/accel.sh@20 -- # read -r var val
00:08:00.045   10:45:48	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:00.045   10:45:48	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:08:00.045   10:45:48	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:00.045  
00:08:00.045  real	0m2.967s
00:08:00.045  user	0m2.634s
00:08:00.045  sys	0m0.337s
00:08:00.045   10:45:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:00.045   10:45:48	-- common/autotest_common.sh@10 -- # set +x
00:08:00.045  ************************************
00:08:00.045  END TEST accel_decomp
00:08:00.045  ************************************
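The decompress invocation differs from the compress one only by the workload name and the extra -y flag, and its configuration correspondingly reports "Verify: Yes": each decompressed buffer is checked against the original data rather than just timed. Sketch, same assumptions:

    # Decompress with result verification enabled (sketch).
    SPDK=/var/jenkins/workspace/nvme-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y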
00:08:00.045   10:45:48	-- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:00.045   10:45:48	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:08:00.045   10:45:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:00.045   10:45:48	-- common/autotest_common.sh@10 -- # set +x
00:08:00.045  ************************************
00:08:00.045  START TEST accel_decmop_full
00:08:00.045  ************************************
00:08:00.045   10:45:48	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:00.045   10:45:48	-- accel/accel.sh@16 -- # local accel_opc
00:08:00.045   10:45:48	-- accel/accel.sh@17 -- # local accel_module
00:08:00.045    10:45:48	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:00.045    10:45:48	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:00.045     10:45:48	-- accel/accel.sh@12 -- # build_accel_config
00:08:00.045     10:45:48	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:00.045     10:45:48	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:00.045     10:45:48	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:00.045     10:45:48	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:00.045     10:45:48	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:00.045     10:45:48	-- accel/accel.sh@41 -- # local IFS=,
00:08:00.045     10:45:48	-- accel/accel.sh@42 -- # jq -r .
00:08:00.045  [2024-12-15 10:45:48.906069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:00.045  [2024-12-15 10:45:48.906148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097108 ]
00:08:00.045  EAL: No free 2048 kB hugepages reported on node 1
00:08:00.045  [2024-12-15 10:45:49.010347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:00.304  [2024-12-15 10:45:49.107586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.683   10:45:50	-- accel/accel.sh@18 -- # out='Preparing input file...
00:08:01.683  
00:08:01.683  SPDK Configuration:
00:08:01.683  Core mask:      0x1
00:08:01.683  
00:08:01.683  Accel Perf Configuration:
00:08:01.683  Workload Type:  decompress
00:08:01.683  Transfer size:  111250 bytes
00:08:01.683  Vector count    1
00:08:01.683  Module:         software
00:08:01.683  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:01.683  Queue depth:    32
00:08:01.683  Allocate depth: 32
00:08:01.683  # threads/core: 1
00:08:01.683  Run time:       1 seconds
00:08:01.683  Verify:         Yes
00:08:01.683  
00:08:01.683  Running for 1 seconds...
00:08:01.683  
00:08:01.683  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:01.683  ------------------------------------------------------------------------------------
00:08:01.683  0,0                        3808/s        157 MiB/s                0                0
00:08:01.683  ====================================================================================
00:08:01.683  Total                      3808/s        404 MiB/s                0                0'
00:08:01.683   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.683   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.683    10:45:50	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:01.683     10:45:50	-- accel/accel.sh@12 -- # build_accel_config
00:08:01.683    10:45:50	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0
00:08:01.683     10:45:50	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:01.683     10:45:50	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:01.683     10:45:50	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:01.684     10:45:50	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:01.684     10:45:50	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:01.684     10:45:50	-- accel/accel.sh@41 -- # local IFS=,
00:08:01.684     10:45:50	-- accel/accel.sh@42 -- # jq -r .
00:08:01.684  [2024-12-15 10:45:50.395563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:01.684  [2024-12-15 10:45:50.395636] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097294 ]
00:08:01.684  EAL: No free 2048 kB hugepages reported on node 1
00:08:01.684  [2024-12-15 10:45:50.501296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:01.684  [2024-12-15 10:45:50.598066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=0x1
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=decompress
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@24 -- # accel_opc=decompress
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val='111250 bytes'
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=software
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@23 -- # accel_module=software
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=32
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=32
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=1
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=Yes
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:01.684   10:45:50	-- accel/accel.sh@21 -- # val=
00:08:01.684   10:45:50	-- accel/accel.sh@22 -- # case "$var" in
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # IFS=:
00:08:01.684   10:45:50	-- accel/accel.sh@20 -- # read -r var val
00:08:03.063   10:45:51	-- accel/accel.sh@21 -- # val=
00:08:03.063   10:45:51	-- accel/accel.sh@22 -- # case "$var" in
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # IFS=:
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # read -r var val
00:08:03.063   10:45:51	-- accel/accel.sh@21 -- # val=
00:08:03.063   10:45:51	-- accel/accel.sh@22 -- # case "$var" in
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # IFS=:
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # read -r var val
00:08:03.063   10:45:51	-- accel/accel.sh@21 -- # val=
00:08:03.063   10:45:51	-- accel/accel.sh@22 -- # case "$var" in
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # IFS=:
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # read -r var val
00:08:03.063   10:45:51	-- accel/accel.sh@21 -- # val=
00:08:03.063   10:45:51	-- accel/accel.sh@22 -- # case "$var" in
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # IFS=:
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # read -r var val
00:08:03.063   10:45:51	-- accel/accel.sh@21 -- # val=
00:08:03.063   10:45:51	-- accel/accel.sh@22 -- # case "$var" in
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # IFS=:
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # read -r var val
00:08:03.063   10:45:51	-- accel/accel.sh@21 -- # val=
00:08:03.063   10:45:51	-- accel/accel.sh@22 -- # case "$var" in
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # IFS=:
00:08:03.063   10:45:51	-- accel/accel.sh@20 -- # read -r var val
00:08:03.063   10:45:51	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:03.063   10:45:51	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:08:03.063   10:45:51	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:03.063  
00:08:03.063  real	0m2.980s
00:08:03.063  user	0m2.651s
00:08:03.063  sys	0m0.335s
00:08:03.063   10:45:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:03.063   10:45:51	-- common/autotest_common.sh@10 -- # set +x
00:08:03.063  ************************************
00:08:03.063  END TEST accel_decmop_full
00:08:03.063  ************************************
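accel_decmop_full (the spelling follows the test name in accel.sh) adds -o 0 to the decompress command line, and the configuration switches from 4096-byte to 111250-byte transfers, so the option appears to select full-chunk transfer sizes instead of the 4 KiB default; that reading is inferred from this log, not from the tool's help text. The Total row is again consistent with the transfer rate, with the numbers copied from the table above:

    # 3808 transfers/s at 111250 bytes each, expressed in MiB/s:
    echo 'scale=1; 3808 * 111250 / (1024 * 1024)' | bc   # 404.0, matching "Total ... 404 MiB/s"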
00:08:03.063   10:45:51	-- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:03.063   10:45:51	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:08:03.063   10:45:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:03.063   10:45:51	-- common/autotest_common.sh@10 -- # set +x
00:08:03.063  ************************************
00:08:03.063  START TEST accel_decomp_mcore
00:08:03.063  ************************************
00:08:03.063   10:45:51	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:03.063   10:45:51	-- accel/accel.sh@16 -- # local accel_opc
00:08:03.063   10:45:51	-- accel/accel.sh@17 -- # local accel_module
00:08:03.063    10:45:51	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:03.063     10:45:51	-- accel/accel.sh@12 -- # build_accel_config
00:08:03.063    10:45:51	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:03.063     10:45:51	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:03.064     10:45:51	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:03.064     10:45:51	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:03.064     10:45:51	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:03.064     10:45:51	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:03.064     10:45:51	-- accel/accel.sh@41 -- # local IFS=,
00:08:03.064     10:45:51	-- accel/accel.sh@42 -- # jq -r .
00:08:03.064  [2024-12-15 10:45:51.934413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:03.064  [2024-12-15 10:45:51.934477] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097491 ]
00:08:03.064  EAL: No free 2048 kB hugepages reported on node 1
00:08:03.064  [2024-12-15 10:45:52.036426] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:03.323  [2024-12-15 10:45:52.133930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:03.323  [2024-12-15 10:45:52.134016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:03.323  [2024-12-15 10:45:52.134095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:03.323  [2024-12-15 10:45:52.134098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.702   10:45:53	-- accel/accel.sh@18 -- # out='Preparing input file...
00:08:04.702  
00:08:04.702  SPDK Configuration:
00:08:04.703  Core mask:      0xf
00:08:04.703  
00:08:04.703  Accel Perf Configuration:
00:08:04.703  Workload Type:  decompress
00:08:04.703  Transfer size:  4096 bytes
00:08:04.703  Vector count:   1
00:08:04.703  Module:         software
00:08:04.703  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:04.703  Queue depth:    32
00:08:04.703  Allocate depth: 32
00:08:04.703  # threads/core: 1
00:08:04.703  Run time:       1 seconds
00:08:04.703  Verify:         Yes
00:08:04.703  
00:08:04.703  Running for 1 seconds...
00:08:04.703  
00:08:04.703  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:04.703  ------------------------------------------------------------------------------------
00:08:04.703  0,0                       50272/s         92 MiB/s                0                0
00:08:04.703  3,0                       50496/s         93 MiB/s                0                0
00:08:04.703  2,0                       71072/s        130 MiB/s                0                0
00:08:04.703  1,0                       50240/s         92 MiB/s                0                0
00:08:04.703  ====================================================================================
00:08:04.703  Total                    222080/s        867 MiB/s                0                0'
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703    10:45:53	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:04.703     10:45:53	-- accel/accel.sh@12 -- # build_accel_config
00:08:04.703    10:45:53	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:08:04.703     10:45:53	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:04.703     10:45:53	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:04.703     10:45:53	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:04.703     10:45:53	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:04.703     10:45:53	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:04.703     10:45:53	-- accel/accel.sh@41 -- # local IFS=,
00:08:04.703     10:45:53	-- accel/accel.sh@42 -- # jq -r .
00:08:04.703  [2024-12-15 10:45:53.398091] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:04.703  [2024-12-15 10:45:53.398159] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097683 ]
00:08:04.703  EAL: No free 2048 kB hugepages reported on node 1
00:08:04.703  [2024-12-15 10:45:53.501926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:04.703  [2024-12-15 10:45:53.601249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:04.703  [2024-12-15 10:45:53.601335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:04.703  [2024-12-15 10:45:53.601414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:04.703  [2024-12-15 10:45:53.601418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=0xf
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=decompress
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@24 -- # accel_opc=decompress
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=software
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@23 -- # accel_module=software
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=32
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=32
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=1
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=Yes
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:04.703   10:45:53	-- accel/accel.sh@21 -- # val=
00:08:04.703   10:45:53	-- accel/accel.sh@22 -- # case "$var" in
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # IFS=:
00:08:04.703   10:45:53	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@21 -- # val=
00:08:06.083   10:45:54	-- accel/accel.sh@22 -- # case "$var" in
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # IFS=:
00:08:06.083   10:45:54	-- accel/accel.sh@20 -- # read -r var val
00:08:06.083   10:45:54	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:06.083   10:45:54	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:08:06.083   10:45:54	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:06.083  
00:08:06.083  real	0m2.944s
00:08:06.083  user	0m9.345s
00:08:06.083  sys	0m0.343s
00:08:06.083   10:45:54	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:06.083   10:45:54	-- common/autotest_common.sh@10 -- # set +x
00:08:06.083  ************************************
00:08:06.083  END TEST accel_decomp_mcore
00:08:06.083  ************************************
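The -m 0xf mask passed to accel_perf selects CPU cores by set bits, which is why the mcore run above starts four reactors on cores 0 through 3. An illustrative snippet (not part of the test scripts) that expands such a mask:

    mask=0xf
    for ((i = 0; i < 8; i++)); do
        # each set bit selects one core; 0xf has bits 0-3 set
        (((mask >> i) & 1)) && echo "core $i selected"
    done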
00:08:06.083   10:45:54	-- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:06.083   10:45:54	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:06.083   10:45:54	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:06.083   10:45:54	-- common/autotest_common.sh@10 -- # set +x
00:08:06.083  ************************************
00:08:06.083  START TEST accel_decomp_full_mcore
00:08:06.083  ************************************
00:08:06.083   10:45:54	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:06.083   10:45:54	-- accel/accel.sh@16 -- # local accel_opc
00:08:06.083   10:45:54	-- accel/accel.sh@17 -- # local accel_module
00:08:06.083    10:45:54	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:06.083    10:45:54	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:06.083     10:45:54	-- accel/accel.sh@12 -- # build_accel_config
00:08:06.083     10:45:54	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:06.083     10:45:54	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:06.083     10:45:54	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:06.083     10:45:54	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:06.083     10:45:54	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:06.083     10:45:54	-- accel/accel.sh@41 -- # local IFS=,
00:08:06.083     10:45:54	-- accel/accel.sh@42 -- # jq -r .
00:08:06.083  [2024-12-15 10:45:54.922004] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:06.083  [2024-12-15 10:45:54.922072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097880 ]
00:08:06.083  EAL: No free 2048 kB hugepages reported on node 1
00:08:06.083  [2024-12-15 10:45:55.027263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:06.342  [2024-12-15 10:45:55.128372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:06.342  [2024-12-15 10:45:55.128458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:06.342  [2024-12-15 10:45:55.128540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:06.342  [2024-12-15 10:45:55.128543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.720   10:45:56	-- accel/accel.sh@18 -- # out='Preparing input file...
00:08:07.720  
00:08:07.720  SPDK Configuration:
00:08:07.720  Core mask:      0xf
00:08:07.720  
00:08:07.720  Accel Perf Configuration:
00:08:07.720  Workload Type:  decompress
00:08:07.720  Transfer size:  111250 bytes
00:08:07.720  Vector count:   1
00:08:07.720  Module:         software
00:08:07.721  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:07.721  Queue depth:    32
00:08:07.721  Allocate depth: 32
00:08:07.721  # threads/core: 1
00:08:07.721  Run time:       1 seconds
00:08:07.721  Verify:         Yes
00:08:07.721  
00:08:07.721  Running for 1 seconds...
00:08:07.721  
00:08:07.721  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:07.721  ------------------------------------------------------------------------------------
00:08:07.721  0,0                        3776/s        155 MiB/s                0                0
00:08:07.721  3,0                        3776/s        155 MiB/s                0                0
00:08:07.721  2,0                        5536/s        228 MiB/s                0                0
00:08:07.721  1,0                        3776/s        155 MiB/s                0                0
00:08:07.721  ====================================================================================
00:08:07.721  Total                     16864/s       1789 MiB/s                0                0'
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721    10:45:56	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:07.721     10:45:56	-- accel/accel.sh@12 -- # build_accel_config
00:08:07.721    10:45:56	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:08:07.721     10:45:56	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:07.721     10:45:56	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:07.721     10:45:56	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:07.721     10:45:56	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:07.721     10:45:56	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:07.721     10:45:56	-- accel/accel.sh@41 -- # local IFS=,
00:08:07.721     10:45:56	-- accel/accel.sh@42 -- # jq -r .
00:08:07.721  [2024-12-15 10:45:56.424491] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:07.721  [2024-12-15 10:45:56.424565] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098071 ]
00:08:07.721  EAL: No free 2048 kB hugepages reported on node 1
00:08:07.721  [2024-12-15 10:45:56.530942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:07.721  [2024-12-15 10:45:56.631242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:07.721  [2024-12-15 10:45:56.631329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:07.721  [2024-12-15 10:45:56.631409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:07.721  [2024-12-15 10:45:56.631414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=0xf
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=decompress
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@24 -- # accel_opc=decompress
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val='111250 bytes'
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=software
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@23 -- # accel_module=software
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=32
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=32
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=1
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=Yes
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:07.721   10:45:56	-- accel/accel.sh@21 -- # val=
00:08:07.721   10:45:56	-- accel/accel.sh@22 -- # case "$var" in
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # IFS=:
00:08:07.721   10:45:56	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@21 -- # val=
00:08:09.100   10:45:57	-- accel/accel.sh@22 -- # case "$var" in
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # IFS=:
00:08:09.100   10:45:57	-- accel/accel.sh@20 -- # read -r var val
00:08:09.100   10:45:57	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:09.100   10:45:57	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:08:09.100   10:45:57	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:09.100  
00:08:09.100  real	0m3.011s
00:08:09.100  user	0m9.528s
00:08:09.100  sys	0m0.371s
00:08:09.100   10:45:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:09.100   10:45:57	-- common/autotest_common.sh@10 -- # set +x
00:08:09.100  ************************************
00:08:09.100  END TEST accel_decomp_full_mcore
00:08:09.100  ************************************
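The _full variants add -o 0 to the accel_test arguments; in these runs that corresponds to the 111250-byte transfer size reported in the configuration, instead of the 4096-byte default of the plain variants (an inference from this log, not from accel_perf documentation). The underlying command, exactly as traced above:

    /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w decompress \
        -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf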
00:08:09.100   10:45:57	-- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:09.100   10:45:57	-- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']'
00:08:09.100   10:45:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:09.100   10:45:57	-- common/autotest_common.sh@10 -- # set +x
00:08:09.100  ************************************
00:08:09.100  START TEST accel_decomp_mthread
00:08:09.100  ************************************
00:08:09.100   10:45:57	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:09.100   10:45:57	-- accel/accel.sh@16 -- # local accel_opc
00:08:09.100   10:45:57	-- accel/accel.sh@17 -- # local accel_module
00:08:09.100    10:45:57	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:09.100    10:45:57	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:09.100     10:45:57	-- accel/accel.sh@12 -- # build_accel_config
00:08:09.100     10:45:57	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:09.100     10:45:57	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:09.100     10:45:57	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:09.100     10:45:57	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:09.100     10:45:57	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:09.100     10:45:57	-- accel/accel.sh@41 -- # local IFS=,
00:08:09.100     10:45:57	-- accel/accel.sh@42 -- # jq -r .
00:08:09.100  [2024-12-15 10:45:57.978292] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:09.100  [2024-12-15 10:45:57.978360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098268 ]
00:08:09.100  EAL: No free 2048 kB hugepages reported on node 1
00:08:09.100  [2024-12-15 10:45:58.086172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:09.360  [2024-12-15 10:45:58.183967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.737   10:45:59	-- accel/accel.sh@18 -- # out='Preparing input file...
00:08:10.737  
00:08:10.737  SPDK Configuration:
00:08:10.737  Core mask:      0x1
00:08:10.737  
00:08:10.737  Accel Perf Configuration:
00:08:10.737  Workload Type:  decompress
00:08:10.737  Transfer size:  4096 bytes
00:08:10.737  Vector count:   1
00:08:10.737  Module:         software
00:08:10.737  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:10.737  Queue depth:    32
00:08:10.737  Allocate depth: 32
00:08:10.737  # threads/core: 2
00:08:10.737  Run time:       1 seconds
00:08:10.737  Verify:         Yes
00:08:10.737  
00:08:10.737  Running for 1 seconds...
00:08:10.737  
00:08:10.737  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:10.737  ------------------------------------------------------------------------------------
00:08:10.737  0,1                       28864/s         53 MiB/s                0                0
00:08:10.737  0,0                       28736/s         52 MiB/s                0                0
00:08:10.737  ====================================================================================
00:08:10.737  Total                     57600/s        225 MiB/s                0                0'
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.737    10:45:59	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:10.737     10:45:59	-- accel/accel.sh@12 -- # build_accel_config
00:08:10.737    10:45:59	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -T 2
00:08:10.737     10:45:59	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:10.737     10:45:59	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:10.737     10:45:59	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:10.737     10:45:59	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:10.737     10:45:59	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:10.737     10:45:59	-- accel/accel.sh@41 -- # local IFS=,
00:08:10.737     10:45:59	-- accel/accel.sh@42 -- # jq -r .
00:08:10.737  [2024-12-15 10:45:59.461612] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:10.737  [2024-12-15 10:45:59.461684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098484 ]
00:08:10.737  EAL: No free 2048 kB hugepages reported on node 1
00:08:10.737  [2024-12-15 10:45:59.567685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:10.737  [2024-12-15 10:45:59.664634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:10.737   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.737   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.737   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.737   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.737   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.737   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.737   10:45:59	-- accel/accel.sh@21 -- # val=0x1
00:08:10.737   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.737   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.737   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=decompress
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@24 -- # accel_opc=decompress
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val='4096 bytes'
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=software
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@23 -- # accel_module=software
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=32
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=32
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=2
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=Yes
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:10.738   10:45:59	-- accel/accel.sh@21 -- # val=
00:08:10.738   10:45:59	-- accel/accel.sh@22 -- # case "$var" in
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # IFS=:
00:08:10.738   10:45:59	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@21 -- # val=
00:08:12.117   10:46:00	-- accel/accel.sh@22 -- # case "$var" in
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # IFS=:
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@21 -- # val=
00:08:12.117   10:46:00	-- accel/accel.sh@22 -- # case "$var" in
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # IFS=:
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@21 -- # val=
00:08:12.117   10:46:00	-- accel/accel.sh@22 -- # case "$var" in
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # IFS=:
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@21 -- # val=
00:08:12.117   10:46:00	-- accel/accel.sh@22 -- # case "$var" in
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # IFS=:
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@21 -- # val=
00:08:12.117   10:46:00	-- accel/accel.sh@22 -- # case "$var" in
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # IFS=:
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@21 -- # val=
00:08:12.117   10:46:00	-- accel/accel.sh@22 -- # case "$var" in
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # IFS=:
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@21 -- # val=
00:08:12.117   10:46:00	-- accel/accel.sh@22 -- # case "$var" in
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # IFS=:
00:08:12.117   10:46:00	-- accel/accel.sh@20 -- # read -r var val
00:08:12.117   10:46:00	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:12.117   10:46:00	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:08:12.117   10:46:00	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:12.117  
00:08:12.117  real	0m2.972s
00:08:12.117  user	0m2.636s
00:08:12.117  sys	0m0.339s
00:08:12.117   10:46:00	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:12.117   10:46:00	-- common/autotest_common.sh@10 -- # set +x
00:08:12.117  ************************************
00:08:12.117  END TEST accel_decomp_mthread
00:08:12.117  ************************************
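With -T 2 the configuration reports two threads per core and the results table gains one row per (core,thread) pair, 0,0 and 0,1 above. The Total row can be cross-checked by summing the per-thread transfer columns from a saved copy of the table (results.txt is a hypothetical file name; the field positions assume the layout shown):

    # for the table above this prints 57600/s, matching the Total row
    awk '/^[0-9]+,[0-9]+/ { gsub("/s", "", $2); sum += $2 } END { print sum "/s" }' results.txt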
00:08:12.117   10:46:00	-- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:12.117   10:46:00	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:08:12.117   10:46:00	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:12.117   10:46:00	-- common/autotest_common.sh@10 -- # set +x
00:08:12.117  ************************************
00:08:12.117  START TEST accel_decomp_full_mthread
00:08:12.117  ************************************
00:08:12.117   10:46:00	-- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:12.117   10:46:00	-- accel/accel.sh@16 -- # local accel_opc
00:08:12.117   10:46:00	-- accel/accel.sh@17 -- # local accel_module
00:08:12.117    10:46:00	-- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:12.117    10:46:00	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:12.117     10:46:00	-- accel/accel.sh@12 -- # build_accel_config
00:08:12.117     10:46:00	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:12.117     10:46:00	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:12.117     10:46:00	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:12.117     10:46:00	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:12.117     10:46:00	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:12.117     10:46:00	-- accel/accel.sh@41 -- # local IFS=,
00:08:12.117     10:46:00	-- accel/accel.sh@42 -- # jq -r .
00:08:12.117  [2024-12-15 10:46:00.984516] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:12.117  [2024-12-15 10:46:00.984567] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098764 ]
00:08:12.117  EAL: No free 2048 kB hugepages reported on node 1
00:08:12.117  [2024-12-15 10:46:01.073041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:12.376  [2024-12-15 10:46:01.167897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:13.756   10:46:02	-- accel/accel.sh@18 -- # out='Preparing input file...
00:08:13.756  
00:08:13.756  SPDK Configuration:
00:08:13.756  Core mask:      0x1
00:08:13.756  
00:08:13.756  Accel Perf Configuration:
00:08:13.756  Workload Type:  decompress
00:08:13.756  Transfer size:  111250 bytes
00:08:13.756  Vector count:   1
00:08:13.756  Module:         software
00:08:13.756  File Name:      /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:13.756  Queue depth:    32
00:08:13.756  Allocate depth: 32
00:08:13.756  # threads/core: 2
00:08:13.756  Run time:       1 seconds
00:08:13.756  Verify:         Yes
00:08:13.756  
00:08:13.756  Running for 1 seconds...
00:08:13.756  
00:08:13.756  Core,Thread             Transfers        Bandwidth           Failed      Miscompares
00:08:13.756  ------------------------------------------------------------------------------------
00:08:13.756  0,1                        1952/s         80 MiB/s                0                0
00:08:13.756  0,0                        1920/s         79 MiB/s                0                0
00:08:13.756  ====================================================================================
00:08:13.756  Total                      3872/s        410 MiB/s                0                0'
00:08:13.756    10:46:02	-- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756    10:46:02	-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:08:13.756     10:46:02	-- accel/accel.sh@12 -- # build_accel_config
00:08:13.756     10:46:02	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:13.756     10:46:02	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:13.756     10:46:02	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:13.756     10:46:02	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:13.756     10:46:02	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:13.756     10:46:02	-- accel/accel.sh@41 -- # local IFS=,
00:08:13.756     10:46:02	-- accel/accel.sh@42 -- # jq -r .
00:08:13.756  [2024-12-15 10:46:02.456418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:13.756  [2024-12-15 10:46:02.456486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2098995 ]
00:08:13.756  EAL: No free 2048 kB hugepages reported on node 1
00:08:13.756  [2024-12-15 10:46:02.548540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:13.756  [2024-12-15 10:46:02.643375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=0x1
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=decompress
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@24 -- # accel_opc=decompress
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val='111250 bytes'
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=software
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@23 -- # accel_module=software
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/bib
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=32
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=32
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=2
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val='1 seconds'
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=Yes
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:13.756   10:46:02	-- accel/accel.sh@21 -- # val=
00:08:13.756   10:46:02	-- accel/accel.sh@22 -- # case "$var" in
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # IFS=:
00:08:13.756   10:46:02	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@21 -- # val=
00:08:15.136   10:46:03	-- accel/accel.sh@22 -- # case "$var" in
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # IFS=:
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@21 -- # val=
00:08:15.136   10:46:03	-- accel/accel.sh@22 -- # case "$var" in
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # IFS=:
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@21 -- # val=
00:08:15.136   10:46:03	-- accel/accel.sh@22 -- # case "$var" in
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # IFS=:
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@21 -- # val=
00:08:15.136   10:46:03	-- accel/accel.sh@22 -- # case "$var" in
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # IFS=:
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@21 -- # val=
00:08:15.136   10:46:03	-- accel/accel.sh@22 -- # case "$var" in
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # IFS=:
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@21 -- # val=
00:08:15.136   10:46:03	-- accel/accel.sh@22 -- # case "$var" in
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # IFS=:
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@21 -- # val=
00:08:15.136   10:46:03	-- accel/accel.sh@22 -- # case "$var" in
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # IFS=:
00:08:15.136   10:46:03	-- accel/accel.sh@20 -- # read -r var val
00:08:15.136   10:46:03	-- accel/accel.sh@28 -- # [[ -n software ]]
00:08:15.136   10:46:03	-- accel/accel.sh@28 -- # [[ -n decompress ]]
00:08:15.136   10:46:03	-- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:08:15.136  
00:08:15.136  real	0m2.940s
00:08:15.136  user	0m2.653s
00:08:15.136  sys	0m0.292s
00:08:15.136   10:46:03	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:15.136   10:46:03	-- common/autotest_common.sh@10 -- # set +x
00:08:15.136  ************************************
00:08:15.136  END TEST accel_decomp_full_mthread
00:08:15.136  ************************************
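Every test above re-runs build_accel_config before launching the binary; in these traces all of its [[ 0 -gt 0 ]] guards evaluate false, so no hardware module is configured and the software module is used. A condensed sketch of that pattern (SPDK_TEST_ACCEL_HW and enable_hw_module are placeholder names, assumed rather than taken from the real script):

    build_accel_config() {
        accel_json_cfg=()
        # each guard appends a JSON fragment only when its flag is set;
        # with every flag at 0, as in this run, the array stays empty
        [[ ${SPDK_TEST_ACCEL_HW:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "enable_hw_module"}')
        local IFS=","
        jq -r . <<< "[${accel_json_cfg[*]}]"   # consumed by accel_perf via -c /dev/fd/62
    }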
00:08:15.136   10:46:03	-- accel/accel.sh@116 -- # [[ n == y ]]
00:08:15.136   10:46:03	-- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:08:15.136   10:46:03	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:08:15.136    10:46:03	-- accel/accel.sh@129 -- # build_accel_config
00:08:15.136   10:46:03	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:15.136    10:46:03	-- accel/accel.sh@32 -- # accel_json_cfg=()
00:08:15.136   10:46:03	-- common/autotest_common.sh@10 -- # set +x
00:08:15.136    10:46:03	-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:08:15.136    10:46:03	-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:08:15.136    10:46:03	-- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:08:15.136    10:46:03	-- accel/accel.sh@37 -- # [[ -n '' ]]
00:08:15.136    10:46:03	-- accel/accel.sh@41 -- # local IFS=,
00:08:15.136    10:46:03	-- accel/accel.sh@42 -- # jq -r .
00:08:15.136  ************************************
00:08:15.136  START TEST accel_dif_functional_tests
00:08:15.136  ************************************
00:08:15.136   10:46:03	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:08:15.136  [2024-12-15 10:46:03.999769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:15.136  [2024-12-15 10:46:03.999844] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099199 ]
00:08:15.136  EAL: No free 2048 kB hugepages reported on node 1
00:08:15.136  [2024-12-15 10:46:04.106213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:15.396  [2024-12-15 10:46:04.206391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:15.396  [2024-12-15 10:46:04.206476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:15.396  [2024-12-15 10:46:04.206481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.396  [2024-12-15 10:46:04.405994] 'OCF_Core' volume operations registered
00:08:15.396  [2024-12-15 10:46:04.409465] 'OCF_Cache' volume operations registered
00:08:15.655  [2024-12-15 10:46:04.413432] 'OCF Composite' volume operations registered
00:08:15.655  [2024-12-15 10:46:04.416936] 'SPDK_block_device' volume operations registered
00:08:15.655  
00:08:15.655  
00:08:15.655       CUnit - A unit testing framework for C - Version 2.1-3
00:08:15.655       http://cunit.sourceforge.net/
00:08:15.655  
00:08:15.655  
00:08:15.655  Suite: accel_dif
00:08:15.655    Test: verify: DIF generated, GUARD check ...passed
00:08:15.655    Test: verify: DIF generated, APPTAG check ...passed
00:08:15.655    Test: verify: DIF generated, REFTAG check ...passed
00:08:15.655    Test: verify: DIF not generated, GUARD check ...[2024-12-15 10:46:04.421389] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:08:15.655  [2024-12-15 10:46:04.421440] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10,  Expected=5a5a, Actual=7867
00:08:15.655  passed
00:08:15.655    Test: verify: DIF not generated, APPTAG check ...[2024-12-15 10:46:04.421479] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:08:15.655  [2024-12-15 10:46:04.421503] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10,  Expected=14, Actual=5a5a
00:08:15.655  passed
00:08:15.655    Test: verify: DIF not generated, REFTAG check ...[2024-12-15 10:46:04.421533] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:08:15.655  [2024-12-15 10:46:04.421557] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:08:15.655  passed
00:08:15.655    Test: verify: APPTAG correct, APPTAG check ...passed
00:08:15.655    Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-15 10:46:04.421629] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30,  Expected=28, Actual=14
00:08:15.655  passed
00:08:15.655    Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:08:15.655    Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:08:15.655    Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:08:15.655    Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-15 10:46:04.421789] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:08:15.655  passed
00:08:15.655    Test: generate copy: DIF generated, GUARD check ...passed
00:08:15.655    Test: generate copy: DIF generated, APPTAG check ...passed
00:08:15.655    Test: generate copy: DIF generated, REFTAG check ...passed
00:08:15.655    Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:08:15.655    Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:08:15.655    Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:08:15.655    Test: generate copy: iovecs-len validate ...[2024-12-15 10:46:04.422045] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of the bounce_iovs array is not valid or is misaligned with block_size.
00:08:15.655  passed
00:08:15.655    Test: generate copy: buffer alignment validate ...passed
00:08:15.655  
00:08:15.655  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:15.655                suites      1      1    n/a      0        0
00:08:15.655                 tests     20     20     20      0        0
00:08:15.655               asserts    204    204    204      0      n/a
00:08:15.655  
00:08:15.655  Elapsed time =    0.003 seconds
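The Guard/App Tag/Ref Tag failures exercised above come from T10 DIF protection metadata. As a rough sketch (this is not SPDK's dif.c; the trailer layout shown is the standard 8-byte DIF format, not anything taken from this log), each protected block carries a CRC-16 guard over the data, a 16-bit application tag, and a 32-bit reference tag that normally tracks the LBA:

    import struct

    def crc16_t10dif(data: bytes) -> int:
        # Bitwise CRC-16/T10-DIF (poly 0x8bb7, init 0, no reflection) -- illustrative.
        crc = 0
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    def dif_verify(block: bytes, dif: bytes, lba: int, app_tag: int) -> None:
        # 8-byte trailer: guard (2B), app tag (2B), ref tag (4B), big-endian.
        guard, atag, rtag = struct.unpack(">HHI", dif)
        if guard != crc16_t10dif(block):
            raise ValueError(f"Failed to compare Guard: LBA={lba:x}")
        if atag != app_tag:
            raise ValueError(f"Failed to compare App Tag: LBA={lba:x}")
        if rtag != lba & 0xFFFFFFFF:
            raise ValueError(f"Failed to compare Ref Tag: LBA={lba:x}")

The "verify: DIF not generated" cases correspond to handing such a verifier a trailer whose fields were never written, which is exactly what the three *ERROR* lines above report.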
00:08:15.915  
00:08:15.915  real	0m0.870s
00:08:15.915  user	0m1.556s
00:08:15.915  sys	0m0.308s
00:08:15.915   10:46:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:15.915   10:46:04	-- common/autotest_common.sh@10 -- # set +x
00:08:15.915  ************************************
00:08:15.915  END TEST accel_dif_functional_tests
00:08:15.915  ************************************
00:08:15.915  
00:08:15.915  real	1m4.008s
00:08:15.915  user	1m10.717s
00:08:15.915  sys	0m8.985s
00:08:15.915   10:46:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:15.915   10:46:04	-- common/autotest_common.sh@10 -- # set +x
00:08:15.915  ************************************
00:08:15.915  END TEST accel
00:08:15.915  ************************************
00:08:15.915   10:46:04	-- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh
00:08:15.915   10:46:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:15.915   10:46:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:15.915   10:46:04	-- common/autotest_common.sh@10 -- # set +x
00:08:15.915  ************************************
00:08:15.915  START TEST accel_rpc
00:08:15.915  ************************************
00:08:15.915   10:46:04	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel/accel_rpc.sh
00:08:16.176  * Looking for test storage...
00:08:16.176  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/accel
00:08:16.176    10:46:05	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:16.176     10:46:05	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:16.176     10:46:05	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:16.176    10:46:05	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:16.176    10:46:05	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:16.176    10:46:05	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:16.176    10:46:05	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:16.176    10:46:05	-- scripts/common.sh@335 -- # IFS=.-:
00:08:16.176    10:46:05	-- scripts/common.sh@335 -- # read -ra ver1
00:08:16.176    10:46:05	-- scripts/common.sh@336 -- # IFS=.-:
00:08:16.176    10:46:05	-- scripts/common.sh@336 -- # read -ra ver2
00:08:16.176    10:46:05	-- scripts/common.sh@337 -- # local 'op=<'
00:08:16.176    10:46:05	-- scripts/common.sh@339 -- # ver1_l=2
00:08:16.176    10:46:05	-- scripts/common.sh@340 -- # ver2_l=1
00:08:16.176    10:46:05	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:16.176    10:46:05	-- scripts/common.sh@343 -- # case "$op" in
00:08:16.176    10:46:05	-- scripts/common.sh@344 -- # : 1
00:08:16.176    10:46:05	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:16.176    10:46:05	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:16.176     10:46:05	-- scripts/common.sh@364 -- # decimal 1
00:08:16.176     10:46:05	-- scripts/common.sh@352 -- # local d=1
00:08:16.176     10:46:05	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:16.176     10:46:05	-- scripts/common.sh@354 -- # echo 1
00:08:16.176    10:46:05	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:16.176     10:46:05	-- scripts/common.sh@365 -- # decimal 2
00:08:16.176     10:46:05	-- scripts/common.sh@352 -- # local d=2
00:08:16.176     10:46:05	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:16.176     10:46:05	-- scripts/common.sh@354 -- # echo 2
00:08:16.176    10:46:05	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:16.176    10:46:05	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:16.176    10:46:05	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:16.176    10:46:05	-- scripts/common.sh@367 -- # return 0
00:08:16.176    10:46:05	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:16.176    10:46:05	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:16.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.176  		--rc genhtml_branch_coverage=1
00:08:16.176  		--rc genhtml_function_coverage=1
00:08:16.176  		--rc genhtml_legend=1
00:08:16.176  		--rc geninfo_all_blocks=1
00:08:16.176  		--rc geninfo_unexecuted_blocks=1
00:08:16.176  		
00:08:16.176  		'
00:08:16.176    10:46:05	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:16.176  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.176  		--rc genhtml_branch_coverage=1
00:08:16.176  		--rc genhtml_function_coverage=1
00:08:16.176  		--rc genhtml_legend=1
00:08:16.176  		--rc geninfo_all_blocks=1
00:08:16.176  		--rc geninfo_unexecuted_blocks=1
00:08:16.176  		
00:08:16.177  		'
00:08:16.177    10:46:05	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:16.177  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.177  		--rc genhtml_branch_coverage=1
00:08:16.177  		--rc genhtml_function_coverage=1
00:08:16.177  		--rc genhtml_legend=1
00:08:16.177  		--rc geninfo_all_blocks=1
00:08:16.177  		--rc geninfo_unexecuted_blocks=1
00:08:16.177  		
00:08:16.177  		'
00:08:16.177    10:46:05	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:16.177  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:16.177  		--rc genhtml_branch_coverage=1
00:08:16.177  		--rc genhtml_function_coverage=1
00:08:16.177  		--rc genhtml_legend=1
00:08:16.177  		--rc geninfo_all_blocks=1
00:08:16.177  		--rc geninfo_unexecuted_blocks=1
00:08:16.177  		
00:08:16.177  		'
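The xtrace above is scripts/common.sh deciding whether the installed lcov predates version 2: cmp_versions splits each version string on '.', '-', and ':' and compares the numeric fields left to right, padding the shorter list with zeros. A minimal Python equivalent of that check (a sketch of the shell logic, not a call into any SPDK script):

    import re

    def version_lt(v1: str, v2: str) -> bool:
        # Mirror of 'lt 1.15 2' -> cmp_versions "$1" "<" "$2" in scripts/common.sh.
        def parse(v: str) -> list:
            return [int(x) for x in re.split(r"[.:-]", v) if x.isdigit()]
        a, b = parse(v1), parse(v2)
        width = max(len(a), len(b))
        a += [0] * (width - len(a))
        b += [0] * (width - len(b))
        return a < b

    assert version_lt("1.15", "2")  # true here, so the legacy --rc LCOV_OPTS get exported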
00:08:16.177   10:46:05	-- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:08:16.177   10:46:05	-- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2099440
00:08:16.177   10:46:05	-- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:08:16.177   10:46:05	-- accel/accel_rpc.sh@15 -- # waitforlisten 2099440
00:08:16.177   10:46:05	-- common/autotest_common.sh@829 -- # '[' -z 2099440 ']'
00:08:16.177   10:46:05	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:16.177   10:46:05	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:16.177   10:46:05	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:16.177  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:16.177   10:46:05	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:16.177   10:46:05	-- common/autotest_common.sh@10 -- # set +x
00:08:16.177  [2024-12-15 10:46:05.176451] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:16.177  [2024-12-15 10:46:05.176527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099440 ]
00:08:16.440  EAL: No free 2048 kB hugepages reported on node 1
00:08:16.440  [2024-12-15 10:46:05.278829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:16.440  [2024-12-15 10:46:05.380704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:16.440  [2024-12-15 10:46:05.380859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:17.378   10:46:06	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:17.378   10:46:06	-- common/autotest_common.sh@862 -- # return 0
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:08:17.378   10:46:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:17.378   10:46:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:17.378   10:46:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.378  ************************************
00:08:17.378  START TEST accel_assign_opcode
00:08:17.378  ************************************
00:08:17.378   10:46:06	-- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:08:17.378   10:46:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.378   10:46:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.378  [2024-12-15 10:46:06.143189] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:08:17.378   10:46:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:08:17.378   10:46:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.378   10:46:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.378  [2024-12-15 10:46:06.151207] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:08:17.378   10:46:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.378   10:46:06	-- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:08:17.378   10:46:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.378   10:46:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.378  [2024-12-15 10:46:06.359428] 'OCF_Core' volume operations registered
00:08:17.378  [2024-12-15 10:46:06.362706] 'OCF_Cache' volume operations registered
00:08:17.378  [2024-12-15 10:46:06.366393] 'OCF Composite' volume operations registered
00:08:17.378  [2024-12-15 10:46:06.369681] 'SPDK_block_device' volume operations registered
00:08:17.637   10:46:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.637   10:46:06	-- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:08:17.637   10:46:06	-- accel/accel_rpc.sh@42 -- # jq -r .copy
00:08:17.637   10:46:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:17.637   10:46:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.637   10:46:06	-- accel/accel_rpc.sh@42 -- # grep software
00:08:17.637   10:46:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:17.637  software
00:08:17.637  
00:08:17.637  real	0m0.398s
00:08:17.637  user	0m0.046s
00:08:17.637  sys	0m0.016s
00:08:17.637   10:46:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:17.637   10:46:06	-- common/autotest_common.sh@10 -- # set +x
00:08:17.637  ************************************
00:08:17.637  END TEST accel_assign_opcode
00:08:17.637  ************************************
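The assign-opcode flow above is plain RPC traffic: map the 'copy' opcode to a module before framework_start_init, then read the mapping back. Driving the same sequence from Python via rpc.py (the path and RPC names are the ones this log itself uses; the subprocess wrapper is just for illustration):

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py"

    def rpc(*args: str) -> str:
        # Thin wrapper over rpc.py; raises if the RPC fails.
        return subprocess.run([RPC, *args], check=True,
                              capture_output=True, text=True).stdout

    rpc("accel_assign_opc", "-o", "copy", "-m", "software")  # before framework init
    rpc("framework_start_init")
    assignments = json.loads(rpc("accel_get_opc_assignments"))
    assert assignments["copy"] == "software"                 # what 'grep software' saw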
00:08:17.637   10:46:06	-- accel/accel_rpc.sh@55 -- # killprocess 2099440
00:08:17.637   10:46:06	-- common/autotest_common.sh@936 -- # '[' -z 2099440 ']'
00:08:17.637   10:46:06	-- common/autotest_common.sh@940 -- # kill -0 2099440
00:08:17.637    10:46:06	-- common/autotest_common.sh@941 -- # uname
00:08:17.637   10:46:06	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:17.637    10:46:06	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2099440
00:08:17.637   10:46:06	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:17.637   10:46:06	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:17.637   10:46:06	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2099440'
00:08:17.637  killing process with pid 2099440
00:08:17.637   10:46:06	-- common/autotest_common.sh@955 -- # kill 2099440
00:08:17.637   10:46:06	-- common/autotest_common.sh@960 -- # wait 2099440
00:08:18.205  
00:08:18.205  real	0m2.267s
00:08:18.205  user	0m2.247s
00:08:18.205  sys	0m0.651s
00:08:18.205   10:46:07	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:18.205   10:46:07	-- common/autotest_common.sh@10 -- # set +x
00:08:18.205  ************************************
00:08:18.205  END TEST accel_rpc
00:08:18.205  ************************************
00:08:18.464   10:46:07	-- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh
00:08:18.464   10:46:07	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:18.464   10:46:07	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:18.464   10:46:07	-- common/autotest_common.sh@10 -- # set +x
00:08:18.464  ************************************
00:08:18.464  START TEST app_cmdline
00:08:18.464  ************************************
00:08:18.464   10:46:07	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/cmdline.sh
00:08:18.464  * Looking for test storage...
00:08:18.464  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app
00:08:18.464    10:46:07	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:18.464     10:46:07	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:18.464     10:46:07	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:18.464    10:46:07	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:18.464    10:46:07	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:18.464    10:46:07	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:18.465    10:46:07	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:18.465    10:46:07	-- scripts/common.sh@335 -- # IFS=.-:
00:08:18.465    10:46:07	-- scripts/common.sh@335 -- # read -ra ver1
00:08:18.465    10:46:07	-- scripts/common.sh@336 -- # IFS=.-:
00:08:18.465    10:46:07	-- scripts/common.sh@336 -- # read -ra ver2
00:08:18.465    10:46:07	-- scripts/common.sh@337 -- # local 'op=<'
00:08:18.465    10:46:07	-- scripts/common.sh@339 -- # ver1_l=2
00:08:18.465    10:46:07	-- scripts/common.sh@340 -- # ver2_l=1
00:08:18.465    10:46:07	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:18.465    10:46:07	-- scripts/common.sh@343 -- # case "$op" in
00:08:18.465    10:46:07	-- scripts/common.sh@344 -- # : 1
00:08:18.465    10:46:07	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:18.465    10:46:07	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:18.465     10:46:07	-- scripts/common.sh@364 -- # decimal 1
00:08:18.465     10:46:07	-- scripts/common.sh@352 -- # local d=1
00:08:18.465     10:46:07	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:18.465     10:46:07	-- scripts/common.sh@354 -- # echo 1
00:08:18.465    10:46:07	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:18.465     10:46:07	-- scripts/common.sh@365 -- # decimal 2
00:08:18.465     10:46:07	-- scripts/common.sh@352 -- # local d=2
00:08:18.465     10:46:07	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:18.465     10:46:07	-- scripts/common.sh@354 -- # echo 2
00:08:18.465    10:46:07	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:18.465    10:46:07	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:18.465    10:46:07	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:18.465    10:46:07	-- scripts/common.sh@367 -- # return 0
00:08:18.465    10:46:07	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:18.465    10:46:07	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:18.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.465  		--rc genhtml_branch_coverage=1
00:08:18.465  		--rc genhtml_function_coverage=1
00:08:18.465  		--rc genhtml_legend=1
00:08:18.465  		--rc geninfo_all_blocks=1
00:08:18.465  		--rc geninfo_unexecuted_blocks=1
00:08:18.465  		
00:08:18.465  		'
00:08:18.465    10:46:07	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:18.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.465  		--rc genhtml_branch_coverage=1
00:08:18.465  		--rc genhtml_function_coverage=1
00:08:18.465  		--rc genhtml_legend=1
00:08:18.465  		--rc geninfo_all_blocks=1
00:08:18.465  		--rc geninfo_unexecuted_blocks=1
00:08:18.465  		
00:08:18.465  		'
00:08:18.465    10:46:07	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:18.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.465  		--rc genhtml_branch_coverage=1
00:08:18.465  		--rc genhtml_function_coverage=1
00:08:18.465  		--rc genhtml_legend=1
00:08:18.465  		--rc geninfo_all_blocks=1
00:08:18.465  		--rc geninfo_unexecuted_blocks=1
00:08:18.465  		
00:08:18.465  		'
00:08:18.465    10:46:07	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:18.465  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:18.465  		--rc genhtml_branch_coverage=1
00:08:18.465  		--rc genhtml_function_coverage=1
00:08:18.465  		--rc genhtml_legend=1
00:08:18.465  		--rc geninfo_all_blocks=1
00:08:18.465  		--rc geninfo_unexecuted_blocks=1
00:08:18.465  		
00:08:18.465  		'
00:08:18.465   10:46:07	-- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:08:18.465   10:46:07	-- app/cmdline.sh@17 -- # spdk_tgt_pid=2099706
00:08:18.465   10:46:07	-- app/cmdline.sh@18 -- # waitforlisten 2099706
00:08:18.465   10:46:07	-- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:08:18.465   10:46:07	-- common/autotest_common.sh@829 -- # '[' -z 2099706 ']'
00:08:18.465   10:46:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:18.465   10:46:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:18.465   10:46:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:18.465  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:18.465   10:46:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:18.465   10:46:07	-- common/autotest_common.sh@10 -- # set +x
00:08:18.465  [2024-12-15 10:46:07.473966] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:18.465  [2024-12-15 10:46:07.474048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2099706 ]
00:08:18.724  EAL: No free 2048 kB hugepages reported on node 1
00:08:18.724  [2024-12-15 10:46:07.580167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.724  [2024-12-15 10:46:07.684677] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:18.724  [2024-12-15 10:46:07.684835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.983  [2024-12-15 10:46:07.874761] 'OCF_Core' volume operations registered
00:08:18.983  [2024-12-15 10:46:07.878237] 'OCF_Cache' volume operations registered
00:08:18.983  [2024-12-15 10:46:07.882198] 'OCF Composite' volume operations registered
00:08:18.983  [2024-12-15 10:46:07.885774] 'SPDK_block_device' volume operations registered
00:08:19.551   10:46:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:19.551   10:46:08	-- common/autotest_common.sh@862 -- # return 0
00:08:19.551   10:46:08	-- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:08:19.809  {
00:08:19.809    "version": "SPDK v24.01.1-pre git sha1 c13c99a5e",
00:08:19.809    "fields": {
00:08:19.809      "major": 24,
00:08:19.809      "minor": 1,
00:08:19.809      "patch": 1,
00:08:19.809      "suffix": "-pre",
00:08:19.809      "commit": "c13c99a5e"
00:08:19.809    }
00:08:19.809  }
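Under the hood, rpc.py spdk_get_version is a single JSON-RPC 2.0 request over the Unix socket the target announced (/var/tmp/spdk.sock). A bare-bones client along these lines reproduces the response printed above (a sketch only; it assumes the whole reply arrives in one recv, which a real client should not):

    import json
    import socket

    def spdk_rpc(method: str, sock_path: str = "/var/tmp/spdk.sock") -> dict:
        # One-shot JSON-RPC 2.0 call against the spdk_tgt socket.
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps({"jsonrpc": "2.0", "id": 1, "method": method}).encode())
            return json.loads(s.recv(1 << 20))

    print(spdk_rpc("spdk_get_version")["result"]["version"])  # SPDK v24.01.1-pre ...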
00:08:19.809   10:46:08	-- app/cmdline.sh@22 -- # expected_methods=()
00:08:19.809   10:46:08	-- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:08:19.809   10:46:08	-- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:08:19.809   10:46:08	-- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:08:19.809    10:46:08	-- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:08:19.809    10:46:08	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.809    10:46:08	-- common/autotest_common.sh@10 -- # set +x
00:08:19.809    10:46:08	-- app/cmdline.sh@26 -- # jq -r '.[]'
00:08:19.809    10:46:08	-- app/cmdline.sh@26 -- # sort
00:08:19.809    10:46:08	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.809   10:46:08	-- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:08:19.809   10:46:08	-- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:08:19.809   10:46:08	-- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:19.809   10:46:08	-- common/autotest_common.sh@650 -- # local es=0
00:08:19.809   10:46:08	-- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:19.809   10:46:08	-- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:08:19.809   10:46:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:19.809    10:46:08	-- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:08:19.809   10:46:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:19.809    10:46:08	-- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:08:19.809   10:46:08	-- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:08:19.809   10:46:08	-- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:08:19.809   10:46:08	-- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py ]]
00:08:19.810   10:46:08	-- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:08:20.068  request:
00:08:20.068  {
00:08:20.068    "method": "env_dpdk_get_mem_stats",
00:08:20.068    "req_id": 1
00:08:20.068  }
00:08:20.068  Got JSON-RPC error response
00:08:20.068  response:
00:08:20.068  {
00:08:20.068    "code": -32601,
00:08:20.068    "message": "Method not found"
00:08:20.068  }
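The -32601 "Method not found" above is the allow-list doing its job: this spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats is rejected even though the target normally implements it. Reusing the spdk_rpc() sketch from earlier:

    reply = spdk_rpc("env_dpdk_get_mem_stats")
    assert reply["error"]["code"] == -32601   # filtered out by --rpcs-allowed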
00:08:20.068   10:46:08	-- common/autotest_common.sh@653 -- # es=1
00:08:20.068   10:46:08	-- common/autotest_common.sh@661 -- # (( es > 128 ))
00:08:20.068   10:46:08	-- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:08:20.068   10:46:08	-- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:08:20.068   10:46:08	-- app/cmdline.sh@1 -- # killprocess 2099706
00:08:20.068   10:46:08	-- common/autotest_common.sh@936 -- # '[' -z 2099706 ']'
00:08:20.068   10:46:08	-- common/autotest_common.sh@940 -- # kill -0 2099706
00:08:20.068    10:46:08	-- common/autotest_common.sh@941 -- # uname
00:08:20.068   10:46:08	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:20.068    10:46:08	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2099706
00:08:20.068   10:46:08	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:20.068   10:46:08	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:20.068   10:46:08	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2099706'
00:08:20.068  killing process with pid 2099706
00:08:20.068   10:46:08	-- common/autotest_common.sh@955 -- # kill 2099706
00:08:20.068   10:46:08	-- common/autotest_common.sh@960 -- # wait 2099706
00:08:20.636  
00:08:20.636  real	0m2.244s
00:08:20.636  user	0m2.562s
00:08:20.636  sys	0m0.695s
00:08:20.636   10:46:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:20.636   10:46:09	-- common/autotest_common.sh@10 -- # set +x
00:08:20.636  ************************************
00:08:20.636  END TEST app_cmdline
00:08:20.636  ************************************
00:08:20.636   10:46:09	-- spdk/autotest.sh@179 -- # run_test version /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh
00:08:20.636   10:46:09	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:08:20.636   10:46:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:20.636   10:46:09	-- common/autotest_common.sh@10 -- # set +x
00:08:20.636  ************************************
00:08:20.636  START TEST version
00:08:20.636  ************************************
00:08:20.636   10:46:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/version.sh
00:08:20.636  * Looking for test storage...
00:08:20.636  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app
00:08:20.636    10:46:09	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:20.636     10:46:09	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:20.636     10:46:09	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:20.896    10:46:09	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:20.896    10:46:09	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:20.896    10:46:09	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:20.896    10:46:09	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:20.896    10:46:09	-- scripts/common.sh@335 -- # IFS=.-:
00:08:20.896    10:46:09	-- scripts/common.sh@335 -- # read -ra ver1
00:08:20.896    10:46:09	-- scripts/common.sh@336 -- # IFS=.-:
00:08:20.896    10:46:09	-- scripts/common.sh@336 -- # read -ra ver2
00:08:20.896    10:46:09	-- scripts/common.sh@337 -- # local 'op=<'
00:08:20.896    10:46:09	-- scripts/common.sh@339 -- # ver1_l=2
00:08:20.896    10:46:09	-- scripts/common.sh@340 -- # ver2_l=1
00:08:20.896    10:46:09	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:20.896    10:46:09	-- scripts/common.sh@343 -- # case "$op" in
00:08:20.896    10:46:09	-- scripts/common.sh@344 -- # : 1
00:08:20.896    10:46:09	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:20.896    10:46:09	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:20.896     10:46:09	-- scripts/common.sh@364 -- # decimal 1
00:08:20.896     10:46:09	-- scripts/common.sh@352 -- # local d=1
00:08:20.896     10:46:09	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:20.896     10:46:09	-- scripts/common.sh@354 -- # echo 1
00:08:20.896    10:46:09	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:20.896     10:46:09	-- scripts/common.sh@365 -- # decimal 2
00:08:20.896     10:46:09	-- scripts/common.sh@352 -- # local d=2
00:08:20.896     10:46:09	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:20.896     10:46:09	-- scripts/common.sh@354 -- # echo 2
00:08:20.896    10:46:09	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:20.896    10:46:09	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:20.896    10:46:09	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:20.896    10:46:09	-- scripts/common.sh@367 -- # return 0
00:08:20.896    10:46:09	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:20.896    10:46:09	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:20.896  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.896  		--rc genhtml_branch_coverage=1
00:08:20.896  		--rc genhtml_function_coverage=1
00:08:20.896  		--rc genhtml_legend=1
00:08:20.896  		--rc geninfo_all_blocks=1
00:08:20.896  		--rc geninfo_unexecuted_blocks=1
00:08:20.896  		
00:08:20.896  		'
00:08:20.896    10:46:09	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:20.896  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.896  		--rc genhtml_branch_coverage=1
00:08:20.896  		--rc genhtml_function_coverage=1
00:08:20.896  		--rc genhtml_legend=1
00:08:20.896  		--rc geninfo_all_blocks=1
00:08:20.896  		--rc geninfo_unexecuted_blocks=1
00:08:20.896  		
00:08:20.896  		'
00:08:20.896    10:46:09	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:20.896  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.896  		--rc genhtml_branch_coverage=1
00:08:20.896  		--rc genhtml_function_coverage=1
00:08:20.896  		--rc genhtml_legend=1
00:08:20.896  		--rc geninfo_all_blocks=1
00:08:20.896  		--rc geninfo_unexecuted_blocks=1
00:08:20.896  		
00:08:20.896  		'
00:08:20.896    10:46:09	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:20.896  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:20.896  		--rc genhtml_branch_coverage=1
00:08:20.896  		--rc genhtml_function_coverage=1
00:08:20.896  		--rc genhtml_legend=1
00:08:20.896  		--rc geninfo_all_blocks=1
00:08:20.896  		--rc geninfo_unexecuted_blocks=1
00:08:20.896  		
00:08:20.896  		'
00:08:20.896    10:46:09	-- app/version.sh@17 -- # get_header_version major
00:08:20.896    10:46:09	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:08:20.896    10:46:09	-- app/version.sh@14 -- # cut -f2
00:08:20.896    10:46:09	-- app/version.sh@14 -- # tr -d '"'
00:08:20.896   10:46:09	-- app/version.sh@17 -- # major=24
00:08:20.896    10:46:09	-- app/version.sh@18 -- # get_header_version minor
00:08:20.896    10:46:09	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:08:20.896    10:46:09	-- app/version.sh@14 -- # cut -f2
00:08:20.896    10:46:09	-- app/version.sh@14 -- # tr -d '"'
00:08:20.896   10:46:09	-- app/version.sh@18 -- # minor=1
00:08:20.896    10:46:09	-- app/version.sh@19 -- # get_header_version patch
00:08:20.896    10:46:09	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:08:20.896    10:46:09	-- app/version.sh@14 -- # cut -f2
00:08:20.896    10:46:09	-- app/version.sh@14 -- # tr -d '"'
00:08:20.896   10:46:09	-- app/version.sh@19 -- # patch=1
00:08:20.896    10:46:09	-- app/version.sh@20 -- # get_header_version suffix
00:08:20.896    10:46:09	-- app/version.sh@14 -- # tr -d '"'
00:08:20.896    10:46:09	-- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvme-phy-autotest/spdk/include/spdk/version.h
00:08:20.896    10:46:09	-- app/version.sh@14 -- # cut -f2
00:08:20.896   10:46:09	-- app/version.sh@20 -- # suffix=-pre
00:08:20.896   10:46:09	-- app/version.sh@22 -- # version=24.1
00:08:20.896   10:46:09	-- app/version.sh@25 -- # (( patch != 0 ))
00:08:20.896   10:46:09	-- app/version.sh@25 -- # version=24.1.1
00:08:20.896   10:46:09	-- app/version.sh@28 -- # version=24.1.1rc0
00:08:20.896   10:46:09	-- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python:/var/jenkins/workspace/nvme-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvme-phy-autotest/spdk/python
00:08:20.896    10:46:09	-- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:08:20.896   10:46:09	-- app/version.sh@30 -- # py_version=24.1.1rc0
00:08:20.896   10:46:09	-- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]]
00:08:20.896  
00:08:20.896  real	0m0.270s
00:08:20.896  user	0m0.163s
00:08:20.897  sys	0m0.158s
00:08:20.897   10:46:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:20.897   10:46:09	-- common/autotest_common.sh@10 -- # set +x
00:08:20.897  ************************************
00:08:20.897  END TEST version
00:08:20.897  ************************************
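version.sh builds "24.1.1rc0" by scraping the SPDK_VERSION_* defines out of include/spdk/version.h, appending the patch level when it is non-zero, and mapping the "-pre" suffix to the Python-style "rc0" before comparing against spdk.__version__. An equivalent scrape in Python (a sketch of the shell pipeline; the header path is relative to the SPDK checkout):

    import re
    from pathlib import Path

    header = Path("include/spdk/version.h").read_text()

    def get_header_version(name: str) -> str:
        # grep -E '^#define SPDK_VERSION_<NAME>' version.h | cut -f2 | tr -d '"'
        match = re.search(rf'^#define SPDK_VERSION_{name}\s+(\S+)', header, re.M)
        return match.group(1).strip('"')

    major, minor, patch = (get_header_version(n) for n in ("MAJOR", "MINOR", "PATCH"))
    suffix = get_header_version("SUFFIX")                       # '-pre'
    version = f"{major}.{minor}" + (f".{patch}" if patch != "0" else "")
    version += "rc0" if suffix == "-pre" else ""
    assert version == "24.1.1rc0"                               # matches py_version above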
00:08:20.897   10:46:09	-- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']'
00:08:20.897    10:46:09	-- spdk/autotest.sh@191 -- # uname -s
00:08:20.897   10:46:09	-- spdk/autotest.sh@191 -- # [[ Linux == Linux ]]
00:08:20.897   10:46:09	-- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]]
00:08:20.897   10:46:09	-- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]]
00:08:20.897   10:46:09	-- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']'
00:08:20.897   10:46:09	-- spdk/autotest.sh@205 -- # run_test blockdev_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme
00:08:20.897   10:46:09	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:08:20.897   10:46:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:20.897   10:46:09	-- common/autotest_common.sh@10 -- # set +x
00:08:20.897  ************************************
00:08:20.897  START TEST blockdev_nvme
00:08:20.897  ************************************
00:08:20.897   10:46:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh nvme
00:08:21.155  * Looking for test storage...
00:08:21.156  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev
00:08:21.156    10:46:09	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:08:21.156     10:46:09	-- common/autotest_common.sh@1690 -- # lcov --version
00:08:21.156     10:46:09	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:08:21.156    10:46:10	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:08:21.156    10:46:10	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:08:21.156    10:46:10	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:08:21.156    10:46:10	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:08:21.156    10:46:10	-- scripts/common.sh@335 -- # IFS=.-:
00:08:21.156    10:46:10	-- scripts/common.sh@335 -- # read -ra ver1
00:08:21.156    10:46:10	-- scripts/common.sh@336 -- # IFS=.-:
00:08:21.156    10:46:10	-- scripts/common.sh@336 -- # read -ra ver2
00:08:21.156    10:46:10	-- scripts/common.sh@337 -- # local 'op=<'
00:08:21.156    10:46:10	-- scripts/common.sh@339 -- # ver1_l=2
00:08:21.156    10:46:10	-- scripts/common.sh@340 -- # ver2_l=1
00:08:21.156    10:46:10	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:08:21.156    10:46:10	-- scripts/common.sh@343 -- # case "$op" in
00:08:21.156    10:46:10	-- scripts/common.sh@344 -- # : 1
00:08:21.156    10:46:10	-- scripts/common.sh@363 -- # (( v = 0 ))
00:08:21.156    10:46:10	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:21.156     10:46:10	-- scripts/common.sh@364 -- # decimal 1
00:08:21.156     10:46:10	-- scripts/common.sh@352 -- # local d=1
00:08:21.156     10:46:10	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:21.156     10:46:10	-- scripts/common.sh@354 -- # echo 1
00:08:21.156    10:46:10	-- scripts/common.sh@364 -- # ver1[v]=1
00:08:21.156     10:46:10	-- scripts/common.sh@365 -- # decimal 2
00:08:21.156     10:46:10	-- scripts/common.sh@352 -- # local d=2
00:08:21.156     10:46:10	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:21.156     10:46:10	-- scripts/common.sh@354 -- # echo 2
00:08:21.156    10:46:10	-- scripts/common.sh@365 -- # ver2[v]=2
00:08:21.156    10:46:10	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:08:21.156    10:46:10	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:08:21.156    10:46:10	-- scripts/common.sh@367 -- # return 0
00:08:21.156    10:46:10	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:21.156    10:46:10	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:08:21.156  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.156  		--rc genhtml_branch_coverage=1
00:08:21.156  		--rc genhtml_function_coverage=1
00:08:21.156  		--rc genhtml_legend=1
00:08:21.156  		--rc geninfo_all_blocks=1
00:08:21.156  		--rc geninfo_unexecuted_blocks=1
00:08:21.156  		
00:08:21.156  		'
00:08:21.156    10:46:10	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:08:21.156  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.156  		--rc genhtml_branch_coverage=1
00:08:21.156  		--rc genhtml_function_coverage=1
00:08:21.156  		--rc genhtml_legend=1
00:08:21.156  		--rc geninfo_all_blocks=1
00:08:21.156  		--rc geninfo_unexecuted_blocks=1
00:08:21.156  		
00:08:21.156  		'
00:08:21.156    10:46:10	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:08:21.156  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.156  		--rc genhtml_branch_coverage=1
00:08:21.156  		--rc genhtml_function_coverage=1
00:08:21.156  		--rc genhtml_legend=1
00:08:21.156  		--rc geninfo_all_blocks=1
00:08:21.156  		--rc geninfo_unexecuted_blocks=1
00:08:21.156  		
00:08:21.156  		'
00:08:21.156    10:46:10	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:08:21.156  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:21.156  		--rc genhtml_branch_coverage=1
00:08:21.156  		--rc genhtml_function_coverage=1
00:08:21.156  		--rc genhtml_legend=1
00:08:21.156  		--rc geninfo_all_blocks=1
00:08:21.156  		--rc geninfo_unexecuted_blocks=1
00:08:21.156  		
00:08:21.156  		'
00:08:21.156   10:46:10	-- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh
00:08:21.156    10:46:10	-- bdev/nbd_common.sh@6 -- # set -e
00:08:21.156   10:46:10	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:08:21.156   10:46:10	-- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:08:21.156   10:46:10	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json
00:08:21.156   10:46:10	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json
00:08:21.156   10:46:10	-- bdev/blockdev.sh@18 -- # :
00:08:21.156   10:46:10	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:08:21.156   10:46:10	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:08:21.156   10:46:10	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:08:21.156    10:46:10	-- bdev/blockdev.sh@672 -- # uname -s
00:08:21.156   10:46:10	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:08:21.156   10:46:10	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:08:21.156   10:46:10	-- bdev/blockdev.sh@680 -- # test_type=nvme
00:08:21.156   10:46:10	-- bdev/blockdev.sh@681 -- # crypto_device=
00:08:21.156   10:46:10	-- bdev/blockdev.sh@682 -- # dek=
00:08:21.156   10:46:10	-- bdev/blockdev.sh@683 -- # env_ctx=
00:08:21.156   10:46:10	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:08:21.156   10:46:10	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:08:21.156   10:46:10	-- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]]
00:08:21.156   10:46:10	-- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]]
00:08:21.156   10:46:10	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:08:21.156   10:46:10	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=2100198
00:08:21.156   10:46:10	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:08:21.156   10:46:10	-- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' ''
00:08:21.156   10:46:10	-- bdev/blockdev.sh@47 -- # waitforlisten 2100198
00:08:21.156   10:46:10	-- common/autotest_common.sh@829 -- # '[' -z 2100198 ']'
00:08:21.156   10:46:10	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:21.156   10:46:10	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:21.156   10:46:10	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:21.156  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:21.156   10:46:10	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:21.156   10:46:10	-- common/autotest_common.sh@10 -- # set +x
00:08:21.156  [2024-12-15 10:46:10.116537] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:21.156  [2024-12-15 10:46:10.116623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2100198 ]
00:08:21.156  EAL: No free 2048 kB hugepages reported on node 1
00:08:21.414  [2024-12-15 10:46:10.215404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:21.414  [2024-12-15 10:46:10.315960] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:08:21.414  [2024-12-15 10:46:10.316117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.673  [2024-12-15 10:46:10.504991] 'OCF_Core' volume operations registered
00:08:21.673  [2024-12-15 10:46:10.508190] 'OCF_Cache' volume operations registered
00:08:21.673  [2024-12-15 10:46:10.511893] 'OCF Composite' volume operations registered
00:08:21.673  [2024-12-15 10:46:10.515190] 'SPDK_block_device' volume operations registered
00:08:22.242   10:46:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:22.242   10:46:10	-- common/autotest_common.sh@862 -- # return 0
00:08:22.242   10:46:11	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:08:22.242   10:46:11	-- bdev/blockdev.sh@697 -- # setup_nvme_conf
00:08:22.242   10:46:11	-- bdev/blockdev.sh@79 -- # local json
00:08:22.242   10:46:11	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:08:22.242    10:46:11	-- bdev/blockdev.sh@80 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:22.242   10:46:11	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:5e:00.0" } } ] }'\'''
00:08:22.242   10:46:11	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:22.242   10:46:11	-- common/autotest_common.sh@10 -- # set +x
00:08:25.533   10:46:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.533   10:46:13	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:08:25.533   10:46:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.533   10:46:13	-- common/autotest_common.sh@10 -- # set +x
00:08:25.533   10:46:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.533   10:46:13	-- bdev/blockdev.sh@738 -- # cat
00:08:25.533    10:46:13	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:08:25.533    10:46:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.533    10:46:13	-- common/autotest_common.sh@10 -- # set +x
00:08:25.533    10:46:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.533    10:46:13	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:08:25.533    10:46:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.533    10:46:13	-- common/autotest_common.sh@10 -- # set +x
00:08:25.533    10:46:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.533    10:46:14	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:08:25.533    10:46:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.533    10:46:14	-- common/autotest_common.sh@10 -- # set +x
00:08:25.533    10:46:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.533   10:46:14	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:08:25.533    10:46:14	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:08:25.533    10:46:14	-- common/autotest_common.sh@561 -- # xtrace_disable
00:08:25.533    10:46:14	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:08:25.533    10:46:14	-- common/autotest_common.sh@10 -- # set +x
00:08:25.533    10:46:14	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:25.533   10:46:14	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:08:25.533    10:46:14	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "acb3b546-c680-4d95-a621-55f9affd2e68"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 512,' '  "num_blocks": 7814037168,' '  "uuid": "acb3b546-c680-4d95-a621-55f9affd2e68",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": true,' '    "nvme_io": true' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:5e:00.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:5e:00.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x8086",' '          "model_number": "INTEL SSDPE2KX040T8",' '          "serial_number": "BTLJ83030AK84P0DGN",' '          "firmware_revision": "VDV10184",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 1,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.2"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:08:25.533    10:46:14	-- bdev/blockdev.sh@747 -- # jq -r .name
00:08:25.533   10:46:14	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:08:25.533   10:46:14	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1
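The mapfile/jq pair above boils down to "list the unclaimed bdevs and take their names", which is how Nvme0n1 gets picked as hello_world_bdev. The same selection from Python (rpc.py path as used throughout this log):

    import json
    import subprocess

    RPC = "/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py"
    out = subprocess.run([RPC, "bdev_get_bdevs"], check=True,
                         capture_output=True, text=True).stdout
    # jq -r '.[] | select(.claimed == false)' followed by .name, in one comprehension:
    print([b["name"] for b in json.loads(out) if not b["claimed"]])  # ['Nvme0n1']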
00:08:25.533   10:46:14	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:08:25.533   10:46:14	-- bdev/blockdev.sh@752 -- # killprocess 2100198
00:08:25.533   10:46:14	-- common/autotest_common.sh@936 -- # '[' -z 2100198 ']'
00:08:25.534   10:46:14	-- common/autotest_common.sh@940 -- # kill -0 2100198
00:08:25.534    10:46:14	-- common/autotest_common.sh@941 -- # uname
00:08:25.534   10:46:14	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:25.534    10:46:14	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2100198
00:08:25.534   10:46:14	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:25.534   10:46:14	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:25.534   10:46:14	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2100198'
00:08:25.534  killing process with pid 2100198
00:08:25.534   10:46:14	-- common/autotest_common.sh@955 -- # kill 2100198
00:08:25.534   10:46:14	-- common/autotest_common.sh@960 -- # wait 2100198
00:08:29.729   10:46:18	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:29.729   10:46:18	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:08:29.729   10:46:18	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:08:29.729   10:46:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:29.729   10:46:18	-- common/autotest_common.sh@10 -- # set +x
00:08:29.729  ************************************
00:08:29.729  START TEST bdev_hello_world
00:08:29.729  ************************************
00:08:29.729   10:46:18	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:08:29.729  [2024-12-15 10:46:18.453259] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:29.729  [2024-12-15 10:46:18.453317] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2101434 ]
00:08:29.729  EAL: No free 2048 kB hugepages reported on node 1
00:08:29.729  [2024-12-15 10:46:18.542451] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:29.729  [2024-12-15 10:46:18.637646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:29.988  [2024-12-15 10:46:18.884581] 'OCF_Core' volume operations registered
00:08:29.988  [2024-12-15 10:46:18.888076] 'OCF_Cache' volume operations registered
00:08:29.988  [2024-12-15 10:46:18.892012] 'OCF Composite' volume operations registered
00:08:29.988  [2024-12-15 10:46:18.895509] 'SPDK_block_device' volume operations registered
00:08:33.278  [2024-12-15 10:46:21.756306] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:08:33.278  [2024-12-15 10:46:21.756346] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:08:33.278  [2024-12-15 10:46:21.756366] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:08:33.278  [2024-12-15 10:46:21.758513] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:08:33.278  [2024-12-15 10:46:21.758701] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:08:33.278  [2024-12-15 10:46:21.758721] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:08:33.278  [2024-12-15 10:46:21.760342] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:08:33.278  
00:08:33.278  [2024-12-15 10:46:21.760364] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:08:37.470  
00:08:37.470  real	0m7.408s
00:08:37.470  user	0m6.320s
00:08:37.470  sys	0m0.350s
00:08:37.470   10:46:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:37.470   10:46:25	-- common/autotest_common.sh@10 -- # set +x
00:08:37.470  ************************************
00:08:37.470  END TEST bdev_hello_world
00:08:37.470  ************************************
00:08:37.470   10:46:25	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:08:37.470   10:46:25	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:08:37.470   10:46:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:37.470   10:46:25	-- common/autotest_common.sh@10 -- # set +x
00:08:37.470  ************************************
00:08:37.470  START TEST bdev_bounds
00:08:37.470  ************************************
00:08:37.470   10:46:25	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:08:37.470   10:46:25	-- bdev/blockdev.sh@288 -- # bdevio_pid=2102398
00:08:37.470   10:46:25	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:08:37.470   10:46:25	-- bdev/blockdev.sh@287 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:08:37.470   10:46:25	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 2102398'
00:08:37.470  Process bdevio pid: 2102398
00:08:37.470   10:46:25	-- bdev/blockdev.sh@291 -- # waitforlisten 2102398
00:08:37.470   10:46:25	-- common/autotest_common.sh@829 -- # '[' -z 2102398 ']'
00:08:37.470   10:46:25	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:37.470   10:46:25	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:37.470   10:46:25	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:37.470  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:37.470   10:46:25	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:37.470   10:46:25	-- common/autotest_common.sh@10 -- # set +x
00:08:37.470  [2024-12-15 10:46:25.917896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:37.470  [2024-12-15 10:46:25.917976] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2102398 ]
00:08:37.470  EAL: No free 2048 kB hugepages reported on node 1
00:08:37.470  [2024-12-15 10:46:26.025833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:08:37.470  [2024-12-15 10:46:26.125660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:37.470  [2024-12-15 10:46:26.125680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:08:37.470  [2024-12-15 10:46:26.125684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:37.470  [2024-12-15 10:46:26.377037] 'OCF_Core' volume operations registered
00:08:37.470  [2024-12-15 10:46:26.380509] 'OCF_Cache' volume operations registered
00:08:37.470  [2024-12-15 10:46:26.384487] 'OCF Composite' volume operations registered
00:08:37.470  [2024-12-15 10:46:26.387987] 'SPDK_block_device' volume operations registered
00:08:40.762   10:46:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:40.762   10:46:29	-- common/autotest_common.sh@862 -- # return 0
00:08:40.762   10:46:29	-- bdev/blockdev.sh@292 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:08:40.762  I/O targets:
00:08:40.762    Nvme0n1: 7814037168 blocks of 512 bytes (3815448 MiB)
00:08:40.762  
00:08:40.762  
00:08:40.762       CUnit - A unit testing framework for C - Version 2.1-3
00:08:40.762       http://cunit.sourceforge.net/
00:08:40.762  
00:08:40.762  
00:08:40.762  Suite: bdevio tests on: Nvme0n1
00:08:40.762    Test: blockdev write read block ...passed
00:08:40.762    Test: blockdev write zeroes read block ...passed
00:08:40.762    Test: blockdev write zeroes read no split ...passed
00:08:40.762    Test: blockdev write zeroes read split ...passed
00:08:40.762    Test: blockdev write zeroes read split partial ...passed
00:08:40.762    Test: blockdev reset ...[2024-12-15 10:46:29.630441] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:08:40.762  [2024-12-15 10:46:29.632996] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:40.762  passed
00:08:40.762    Test: blockdev write read 8 blocks ...passed
00:08:40.762    Test: blockdev write read size > 128k ...passed
00:08:40.762    Test: blockdev write read invalid size ...passed
00:08:40.762    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:08:40.762    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:08:40.762    Test: blockdev write read max offset ...passed
00:08:40.762    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:08:40.762    Test: blockdev writev readv 8 blocks ...passed
00:08:40.762    Test: blockdev writev readv 30 x 1block ...passed
00:08:40.762    Test: blockdev writev readv block ...passed
00:08:40.762    Test: blockdev writev readv size > 128k ...passed
00:08:40.762    Test: blockdev writev readv size > 128k in two iovs ...passed
00:08:40.762    Test: blockdev comparev and writev ...passed
00:08:40.762    Test: blockdev nvme passthru rw ...passed
00:08:40.762    Test: blockdev nvme passthru vendor specific ...[2024-12-15 10:46:29.660435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:894 PRP1 0x0 PRP2 0x0
00:08:40.762  [2024-12-15 10:46:29.660465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:894 cdw0:0 sqhd:0056 p:1 m:0 dnr:1
00:08:40.762  passed
00:08:40.762    Test: blockdev nvme admin passthru ...passed
00:08:40.762    Test: blockdev copy ...passed
00:08:40.762  
00:08:40.762  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:40.762                suites      1      1    n/a      0        0
00:08:40.762                 tests     23     23     23      0        0
00:08:40.762               asserts    140    140    140      0      n/a
00:08:40.762  
00:08:40.762  Elapsed time =    0.130 seconds
00:08:40.762  0
00:08:40.762   10:46:29	-- bdev/blockdev.sh@293 -- # killprocess 2102398
00:08:40.762   10:46:29	-- common/autotest_common.sh@936 -- # '[' -z 2102398 ']'
00:08:40.762   10:46:29	-- common/autotest_common.sh@940 -- # kill -0 2102398
00:08:40.762    10:46:29	-- common/autotest_common.sh@941 -- # uname
00:08:40.762   10:46:29	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:40.762    10:46:29	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2102398
00:08:40.762   10:46:29	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:40.762   10:46:29	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:40.762   10:46:29	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2102398'
00:08:40.762  killing process with pid 2102398
00:08:40.762   10:46:29	-- common/autotest_common.sh@955 -- # kill 2102398
00:08:40.762   10:46:29	-- common/autotest_common.sh@960 -- # wait 2102398
00:08:45.130   10:46:33	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:08:45.130  
00:08:45.130  real	0m7.916s
00:08:45.130  user	0m22.772s
00:08:45.130  sys	0m0.613s
00:08:45.130   10:46:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:45.130   10:46:33	-- common/autotest_common.sh@10 -- # set +x
00:08:45.130  ************************************
00:08:45.130  END TEST bdev_bounds
00:08:45.130  ************************************
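For reference, the bounds suite above pairs a bdevio server with the tests.py RPC driver; a minimal standalone re-run would look like the sketch below. The bdevio -w flag (wait for the RPC trigger) is assumed from typical SPDK usage; the paths match this workspace.

    spdk=/var/jenkins/workspace/nvme-phy-autotest/spdk
    $spdk/test/bdev/bdevio/bdevio -w --json $spdk/test/bdev/bdev.json &   # serve the 23 CUnit tests
    $spdk/test/bdev/bdevio/tests.py perform_tests                         # drive them over RPC, as traced above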
00:08:45.130   10:46:33	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 ''
00:08:45.130   10:46:33	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:08:45.130   10:46:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:45.130   10:46:33	-- common/autotest_common.sh@10 -- # set +x
00:08:45.130  ************************************
00:08:45.130  START TEST bdev_nbd
00:08:45.130  ************************************
00:08:45.130   10:46:33	-- common/autotest_common.sh@1114 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json Nvme0n1 ''
00:08:45.130    10:46:33	-- bdev/blockdev.sh@298 -- # uname -s
00:08:45.130   10:46:33	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:08:45.130   10:46:33	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:45.130   10:46:33	-- bdev/blockdev.sh@301 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:08:45.130   10:46:33	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1')
00:08:45.130   10:46:33	-- bdev/blockdev.sh@302 -- # local bdev_all
00:08:45.130   10:46:33	-- bdev/blockdev.sh@303 -- # local bdev_num=1
00:08:45.130   10:46:33	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:08:45.130   10:46:33	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:08:45.130   10:46:33	-- bdev/blockdev.sh@309 -- # local nbd_all
00:08:45.130   10:46:33	-- bdev/blockdev.sh@310 -- # bdev_num=1
00:08:45.130   10:46:33	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0')
00:08:45.130   10:46:33	-- bdev/blockdev.sh@312 -- # local nbd_list
00:08:45.130   10:46:33	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1')
00:08:45.130   10:46:33	-- bdev/blockdev.sh@313 -- # local bdev_list
00:08:45.130   10:46:33	-- bdev/blockdev.sh@316 -- # nbd_pid=2103512
00:08:45.130   10:46:33	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:08:45.130   10:46:33	-- bdev/blockdev.sh@315 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:08:45.130   10:46:33	-- bdev/blockdev.sh@318 -- # waitforlisten 2103512 /var/tmp/spdk-nbd.sock
00:08:45.130   10:46:33	-- common/autotest_common.sh@829 -- # '[' -z 2103512 ']'
00:08:45.130   10:46:33	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:45.130   10:46:33	-- common/autotest_common.sh@834 -- # local max_retries=100
00:08:45.130   10:46:33	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:08:45.130  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:45.130   10:46:33	-- common/autotest_common.sh@838 -- # xtrace_disable
00:08:45.130   10:46:33	-- common/autotest_common.sh@10 -- # set +x
00:08:45.130  [2024-12-15 10:46:33.893204] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:45.130  [2024-12-15 10:46:33.893271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:45.130  EAL: No free 2048 kB hugepages reported on node 1
00:08:45.130  [2024-12-15 10:46:33.985312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:45.130  [2024-12-15 10:46:34.084720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.389  [2024-12-15 10:46:34.328482] 'OCF_Core' volume operations registered
00:08:45.389  [2024-12-15 10:46:34.331739] 'OCF_Cache' volume operations registered
00:08:45.389  [2024-12-15 10:46:34.335356] 'OCF Composite' volume operations registered
00:08:45.389  [2024-12-15 10:46:34.338609] 'SPDK_block_device' volume operations registered
00:08:48.687   10:46:37	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:08:48.687   10:46:37	-- common/autotest_common.sh@862 -- # return 0
00:08:48.687   10:46:37	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@24 -- # local i
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:08:48.687    10:46:37	-- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:08:48.687    10:46:37	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:08:48.687   10:46:37	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:48.687   10:46:37	-- common/autotest_common.sh@867 -- # local i
00:08:48.687   10:46:37	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:48.687   10:46:37	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:48.687   10:46:37	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:48.687   10:46:37	-- common/autotest_common.sh@871 -- # break
00:08:48.687   10:46:37	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:48.687   10:46:37	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:48.687   10:46:37	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:48.687  1+0 records in
00:08:48.687  1+0 records out
00:08:48.687  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029809 s, 13.7 MB/s
00:08:48.687    10:46:37	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:08:48.687   10:46:37	-- common/autotest_common.sh@884 -- # size=4096
00:08:48.687   10:46:37	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:08:48.687   10:46:37	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:48.687   10:46:37	-- common/autotest_common.sh@887 -- # return 0
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:08:48.687   10:46:37	-- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:08:48.687    10:46:37	-- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:08:48.946    {
00:08:48.946      "nbd_device": "/dev/nbd0",
00:08:48.946      "bdev_name": "Nvme0n1"
00:08:48.946    }
00:08:48.946  ]'
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:08:48.946    10:46:37	-- bdev/nbd_common.sh@119 -- # echo '[
00:08:48.946    {
00:08:48.946      "nbd_device": "/dev/nbd0",
00:08:48.946      "bdev_name": "Nvme0n1"
00:08:48.946    }
00:08:48.946  ]'
00:08:48.946    10:46:37	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@51 -- # local i
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:48.946   10:46:37	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:49.205    10:46:38	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:49.205   10:46:38	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:49.205   10:46:38	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:49.205   10:46:38	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:49.205   10:46:38	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:49.205   10:46:38	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:49.205   10:46:38	-- bdev/nbd_common.sh@41 -- # break
00:08:49.205   10:46:38	-- bdev/nbd_common.sh@45 -- # return 0
00:08:49.205    10:46:38	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:49.205    10:46:38	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:49.205     10:46:38	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:49.464    10:46:38	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:49.464     10:46:38	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:49.464     10:46:38	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:49.464    10:46:38	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:49.464     10:46:38	-- bdev/nbd_common.sh@65 -- # echo ''
00:08:49.464     10:46:38	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:49.464     10:46:38	-- bdev/nbd_common.sh@65 -- # true
00:08:49.464    10:46:38	-- bdev/nbd_common.sh@65 -- # count=0
00:08:49.464    10:46:38	-- bdev/nbd_common.sh@66 -- # echo 0
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@122 -- # count=0
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@127 -- # return 0
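The waitfornbd trace above reduces to: poll /proc/partitions for the device, then prove it answers a 4 KiB O_DIRECT read. A condensed sketch (waitfornbd_sketch is illustrative, not the shipped autotest_common.sh helper):

    waitfornbd_sketch() {
        local nbd=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct &&
            [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]      # one block in, one block out
    }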
00:08:49.464   10:46:38	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@12 -- # local i
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:49.464   10:46:38	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:08:49.724  /dev/nbd0
00:08:49.724    10:46:38	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:49.724   10:46:38	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:49.724   10:46:38	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:08:49.724   10:46:38	-- common/autotest_common.sh@867 -- # local i
00:08:49.724   10:46:38	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:08:49.724   10:46:38	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:08:49.724   10:46:38	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:08:49.724   10:46:38	-- common/autotest_common.sh@871 -- # break
00:08:49.724   10:46:38	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:08:49.724   10:46:38	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:08:49.724   10:46:38	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:49.724  1+0 records in
00:08:49.724  1+0 records out
00:08:49.724  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277871 s, 14.7 MB/s
00:08:49.724    10:46:38	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:08:49.724   10:46:38	-- common/autotest_common.sh@884 -- # size=4096
00:08:49.724   10:46:38	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:08:49.724   10:46:38	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:08:49.724   10:46:38	-- common/autotest_common.sh@887 -- # return 0
00:08:49.724   10:46:38	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:49.724   10:46:38	-- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:49.724    10:46:38	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:49.724    10:46:38	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:49.724     10:46:38	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:49.983    10:46:38	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:49.983    {
00:08:49.983      "nbd_device": "/dev/nbd0",
00:08:49.983      "bdev_name": "Nvme0n1"
00:08:49.983    }
00:08:49.983  ]'
00:08:49.983     10:46:38	-- bdev/nbd_common.sh@64 -- # echo '[
00:08:49.983    {
00:08:49.983      "nbd_device": "/dev/nbd0",
00:08:49.983      "bdev_name": "Nvme0n1"
00:08:49.983    }
00:08:49.983  ]'
00:08:49.983     10:46:38	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:50.242    10:46:39	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:08:50.242     10:46:39	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:50.242     10:46:39	-- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:08:50.242    10:46:39	-- bdev/nbd_common.sh@65 -- # count=1
00:08:50.242    10:46:39	-- bdev/nbd_common.sh@66 -- # echo 1
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@95 -- # count=1
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@71 -- # local operation=write
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:08:50.242  256+0 records in
00:08:50.242  256+0 records out
00:08:50.242  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010744 s, 97.6 MB/s
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:50.242   10:46:39	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:50.242  256+0 records in
00:08:50.242  256+0 records out
00:08:50.242  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204869 s, 51.2 MB/s
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@51 -- # local i
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:50.243   10:46:39	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:50.502    10:46:39	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:50.502   10:46:39	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:50.502   10:46:39	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:50.502   10:46:39	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:50.502   10:46:39	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:50.502   10:46:39	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:50.502   10:46:39	-- bdev/nbd_common.sh@41 -- # break
00:08:50.502   10:46:39	-- bdev/nbd_common.sh@45 -- # return 0
00:08:50.502    10:46:39	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:50.502    10:46:39	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:50.502     10:46:39	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:50.762    10:46:39	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:50.762     10:46:39	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:50.762     10:46:39	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:50.762    10:46:39	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:50.762     10:46:39	-- bdev/nbd_common.sh@65 -- # echo ''
00:08:50.762     10:46:39	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:50.762     10:46:39	-- bdev/nbd_common.sh@65 -- # true
00:08:50.762    10:46:39	-- bdev/nbd_common.sh@65 -- # count=0
00:08:50.762    10:46:39	-- bdev/nbd_common.sh@66 -- # echo 0
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@104 -- # count=0
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@109 -- # return 0
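The data-verify pass just traced is a three-command round trip; condensed below (same commands as above, with the workspace path shortened to $spdk for readability):

    spdk=/var/jenkins/workspace/nvme-phy-autotest/spdk
    dd if=/dev/urandom of=$spdk/test/bdev/nbdrandtest bs=4096 count=256             # 1 MiB of random data
    dd if=$spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # push it through the nbd device
    cmp -b -n 1M $spdk/test/bdev/nbdrandtest /dev/nbd0                              # byte-compare; non-zero exit fails the test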
00:08:50.762   10:46:39	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0')
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:08:50.762   10:46:39	-- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:08:50.762  malloc_lvol_verify
00:08:51.021   10:46:39	-- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:08:51.021  afad04a0-baca-4ce8-9d37-10a2f8fa3e5a
00:08:51.280   10:46:40	-- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:08:51.280  42ed6c38-1aca-47aa-9ca1-f985cb298331
00:08:51.280   10:46:40	-- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:08:51.539  /dev/nbd0
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:08:51.539  mke2fs 1.47.0 (5-Feb-2023)
00:08:51.539  Discarding device blocks:    0/4096         done                            
00:08:51.539  Creating filesystem with 4096 1k blocks and 1024 inodes
00:08:51.539  
00:08:51.539  Allocating group tables: 0/1   done                            
00:08:51.539  Writing inode tables: 0/1   done                            
00:08:51.539  Creating journal (1024 blocks): done
00:08:51.539  Writing superblocks and filesystem accounting information: 0/1   done
00:08:51.539  
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@51 -- # local i
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:51.539   10:46:40	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:51.799    10:46:40	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@41 -- # break
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@45 -- # return 0
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:08:51.799   10:46:40	-- bdev/nbd_common.sh@147 -- # return 0
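The lvol leg collected in one place (rpc.py subcommands exactly as traced, with bdev_svc already listening on the nbd socket; malloc/lvol sizes are MiB, block size is bytes):

    rpc="/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    $rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose it as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # mkfs exit status is the pass condition
    $rpc nbd_stop_disk /dev/nbd0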
00:08:51.799   10:46:40	-- bdev/blockdev.sh@324 -- # killprocess 2103512
00:08:51.799   10:46:40	-- common/autotest_common.sh@936 -- # '[' -z 2103512 ']'
00:08:51.799   10:46:40	-- common/autotest_common.sh@940 -- # kill -0 2103512
00:08:51.799    10:46:40	-- common/autotest_common.sh@941 -- # uname
00:08:51.799   10:46:40	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:51.799    10:46:40	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2103512
00:08:51.799   10:46:40	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:08:51.799   10:46:40	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:08:51.799   10:46:40	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2103512'
00:08:51.799  killing process with pid 2103512
00:08:51.799   10:46:40	-- common/autotest_common.sh@955 -- # kill 2103512
00:08:51.799   10:46:40	-- common/autotest_common.sh@960 -- # wait 2103512
00:08:55.994   10:46:44	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:08:55.994  
00:08:55.994  real	0m10.922s
00:08:55.994  user	0m12.595s
00:08:55.994  sys	0m1.740s
00:08:55.994   10:46:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:08:55.994   10:46:44	-- common/autotest_common.sh@10 -- # set +x
00:08:55.994  ************************************
00:08:55.994  END TEST bdev_nbd
00:08:55.994  ************************************
00:08:55.994   10:46:44	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:08:55.994   10:46:44	-- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']'
00:08:55.994   10:46:44	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:08:55.994  skipping fio tests on NVMe due to multi-ns failures.
00:08:55.994   10:46:44	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:55.994   10:46:44	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:55.995   10:46:44	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:08:55.995   10:46:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:55.995   10:46:44	-- common/autotest_common.sh@10 -- # set +x
00:08:55.995  ************************************
00:08:55.995  START TEST bdev_verify
00:08:55.995  ************************************
00:08:55.995   10:46:44	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:55.995  [2024-12-15 10:46:44.855783] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:08:55.995  [2024-12-15 10:46:44.855853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105056 ]
00:08:55.995  EAL: No free 2048 kB hugepages reported on node 1
00:08:55.995  [2024-12-15 10:46:44.962280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:56.260  [2024-12-15 10:46:45.059276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:56.260  [2024-12-15 10:46:45.059282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.520  [2024-12-15 10:46:45.282682] 'OCF_Core' volume operations registered
00:08:56.520  [2024-12-15 10:46:45.285929] 'OCF_Cache' volume operations registered
00:08:56.520  [2024-12-15 10:46:45.289592] 'OCF Composite' volume operations registered
00:08:56.520  [2024-12-15 10:46:45.292883] 'SPDK_block_device' volume operations registered
00:08:59.809  Running I/O for 5 seconds...
00:09:05.087  
00:09:05.087                                                                                                  Latency(us)
00:09:05.087  
[2024-12-15T09:46:54.103Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:05.087  
[2024-12-15T09:46:54.103Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:05.087  	 Verification LBA range: start 0x0 length 0x1d1c0beb
00:09:05.087  	 Nvme0n1             :       5.01   17639.07      68.90       0.00     0.00    7220.20     122.88   11283.59
00:09:05.087  
[2024-12-15T09:46:54.103Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:05.087  	 Verification LBA range: start 0x1d1c0beb length 0x1d1c0beb
00:09:05.087  	 Nvme0n1             :       5.01   17706.00      69.16       0.00     0.00    7193.04     146.92   10713.71
00:09:05.087  
[2024-12-15T09:46:54.103Z]  ===================================================================================================================
00:09:05.087  
[2024-12-15T09:46:54.103Z]  Total                       :              35345.07     138.07       0.00     0.00    7206.59     122.88   11283.59
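The MiB/s column is just the IOPS column scaled by the 4 KiB IO size: 17639.07 x 4096 / 2^20 = 68.90, matching the table. Quick check (illustrative):

    awk 'BEGIN { printf "%.2f\n", 17639.07 * 4096 / 1048576 }'   # 68.90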
00:09:08.378  
00:09:08.378  real	0m12.524s
00:09:08.378  user	0m23.489s
00:09:08.378  sys	0m0.379s
00:09:08.378   10:46:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:08.378   10:46:57	-- common/autotest_common.sh@10 -- # set +x
00:09:08.378  ************************************
00:09:08.378  END TEST bdev_verify
00:09:08.378  ************************************
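For reference, the verify pass is a single bdevperf invocation; the flag glosses below come from common SPDK usage plus the behaviour visible in the table above (two per-core jobs against the same bdev):

    spdk=/var/jenkins/workspace/nvme-phy-autotest/spdk
    $spdk/build/examples/bdevperf --json $spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128: queue depth   -o 4096: 4 KiB IOs      -w verify: write, read back, compare
    # -t 5: run for 5 s     -m 0x3: cores 0 and 1   -C: every core in the mask drives each bdev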
00:09:08.378   10:46:57	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:08.378   10:46:57	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:09:08.378   10:46:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:08.378   10:46:57	-- common/autotest_common.sh@10 -- # set +x
00:09:08.378  ************************************
00:09:08.378  START TEST bdev_verify_big_io
00:09:08.378  ************************************
00:09:08.378   10:46:57	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:08.637  [2024-12-15 10:46:57.422084] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:08.637  [2024-12-15 10:46:57.422150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106851 ]
00:09:08.637  EAL: No free 2048 kB hugepages reported on node 1
00:09:08.637  [2024-12-15 10:46:57.526047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:08.637  [2024-12-15 10:46:57.620694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:08.637  [2024-12-15 10:46:57.620700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:08.897  [2024-12-15 10:46:57.873097] 'OCF_Core' volume operations registered
00:09:08.897  [2024-12-15 10:46:57.876599] 'OCF_Cache' volume operations registered
00:09:08.897  [2024-12-15 10:46:57.880536] 'OCF Composite' volume operations registered
00:09:08.897  [2024-12-15 10:46:57.884049] 'SPDK_block_device' volume operations registered
00:09:12.189  Running I/O for 5 seconds...
00:09:17.465  
00:09:17.465                                                                                                  Latency(us)
00:09:17.465  
[2024-12-15T09:47:06.481Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:17.465  
[2024-12-15T09:47:06.481Z]  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:17.465  	 Verification LBA range: start 0x0 length 0x1d1c0be
00:09:17.465  	 Nvme0n1             :       5.05    1372.11      85.76       0.00     0.00   91742.83    1745.25  142241.61
00:09:17.465  
[2024-12-15T09:47:06.481Z]  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:17.465  	 Verification LBA range: start 0x1d1c0be length 0x1d1c0be
00:09:17.465  	 Nvme0n1             :       5.05    1405.56      87.85       0.00     0.00   89537.05     762.21  119446.48
00:09:17.465  
[2024-12-15T09:47:06.481Z]  ===================================================================================================================
00:09:17.465  
[2024-12-15T09:47:06.481Z]  Total                       :               2777.67     173.60       0.00     0.00   90626.18     762.21  142241.61
00:09:21.658  
00:09:21.658  real	0m12.513s
00:09:21.658  user	0m23.467s
00:09:21.658  sys	0m0.376s
00:09:21.658   10:47:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:21.658   10:47:09	-- common/autotest_common.sh@10 -- # set +x
00:09:21.658  ************************************
00:09:21.658  END TEST bdev_verify_big_io
00:09:21.659  ************************************
00:09:21.659   10:47:09	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:21.659   10:47:09	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:21.659   10:47:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:21.659   10:47:09	-- common/autotest_common.sh@10 -- # set +x
00:09:21.659  ************************************
00:09:21.659  START TEST bdev_write_zeroes
00:09:21.659  ************************************
00:09:21.659   10:47:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:21.659  [2024-12-15 10:47:09.980418] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:21.659  [2024-12-15 10:47:09.980488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108482 ]
00:09:21.659  EAL: No free 2048 kB hugepages reported on node 1
00:09:21.659  [2024-12-15 10:47:10.087055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.659  [2024-12-15 10:47:10.180997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:21.659  [2024-12-15 10:47:10.432731] 'OCF_Core' volume operations registered
00:09:21.659  [2024-12-15 10:47:10.436216] 'OCF_Cache' volume operations registered
00:09:21.659  [2024-12-15 10:47:10.440200] 'OCF Composite' volume operations registered
00:09:21.659  [2024-12-15 10:47:10.443724] 'SPDK_block_device' volume operations registered
00:09:24.948  Running I/O for 1 seconds...
00:09:25.516  
00:09:25.516                                                                                                  Latency(us)
00:09:25.516  
[2024-12-15T09:47:14.532Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:09:25.516  
[2024-12-15T09:47:14.532Z]  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:25.516  	 Nvme0n1             :       1.00   62161.07     242.82       0.00     0.00    2052.31     730.16    2835.14
00:09:25.516  
[2024-12-15T09:47:14.532Z]  ===================================================================================================================
00:09:25.516  
[2024-12-15T09:47:14.532Z]  Total                       :              62161.07     242.82       0.00     0.00    2052.31     730.16    2835.14
00:09:29.709  
00:09:29.709  real	0m8.441s
00:09:29.709  user	0m7.317s
00:09:29.709  sys	0m0.373s
00:09:29.709   10:47:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:29.709   10:47:18	-- common/autotest_common.sh@10 -- # set +x
00:09:29.709  ************************************
00:09:29.709  END TEST bdev_write_zeroes
00:09:29.709  ************************************
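A hypothetical spot check, not part of the harness: if a known region (say the first 1 MiB) has been zeroed by the write_zeroes workload, it should compare equal to /dev/zero:

    dd if=/dev/nvme0n1 bs=4096 count=256 iflag=direct 2>/dev/null | cmp -n 1M - /dev/zero && echo zeroed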
00:09:29.709   10:47:18	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:29.709   10:47:18	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:29.709   10:47:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:29.709   10:47:18	-- common/autotest_common.sh@10 -- # set +x
00:09:29.709  ************************************
00:09:29.709  START TEST bdev_json_nonenclosed
00:09:29.709  ************************************
00:09:29.709   10:47:18	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:29.709  [2024-12-15 10:47:18.465247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:29.709  [2024-12-15 10:47:18.465315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109577 ]
00:09:29.709  EAL: No free 2048 kB hugepages reported on node 1
00:09:29.709  [2024-12-15 10:47:18.570541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:29.710  [2024-12-15 10:47:18.665358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.710  [2024-12-15 10:47:18.665480] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:09:29.710  [2024-12-15 10:47:18.665504] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:29.970  
00:09:29.970  real	0m0.363s
00:09:29.970  user	0m0.237s
00:09:29.970  sys	0m0.124s
00:09:29.970   10:47:18	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:29.970   10:47:18	-- common/autotest_common.sh@10 -- # set +x
00:09:29.970  ************************************
00:09:29.970  END TEST bdev_json_nonenclosed
00:09:29.970  ************************************
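The fixture shape this negative test feeds (content assumed; only the error text above is authoritative) is a config whose top level is not enclosed in {}:

    cat > nonenclosed.json <<'EOF'
    "subsystems": [ { "subsystem": "bdev", "config": [] } ]
    EOF
    # A valid config wraps this in one top-level object: { "subsystems": [ ... ] }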
00:09:29.970   10:47:18	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:29.970   10:47:18	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:29.970   10:47:18	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:29.970   10:47:18	-- common/autotest_common.sh@10 -- # set +x
00:09:29.970  ************************************
00:09:29.970  START TEST bdev_json_nonarray
00:09:29.970  ************************************
00:09:29.970   10:47:18	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:29.970  [2024-12-15 10:47:18.883511] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:29.970  [2024-12-15 10:47:18.883579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109662 ]
00:09:29.970  EAL: No free 2048 kB hugepages reported on node 1
00:09:30.229  [2024-12-15 10:47:18.988406] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.229  [2024-12-15 10:47:19.085376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.229  [2024-12-15 10:47:19.085506] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:09:30.229  [2024-12-15 10:47:19.085530] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:30.229  
00:09:30.229  real	0m0.368s
00:09:30.229  user	0m0.237s
00:09:30.229  sys	0m0.129s
00:09:30.229   10:47:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:30.229   10:47:19	-- common/autotest_common.sh@10 -- # set +x
00:09:30.229  ************************************
00:09:30.229  END TEST bdev_json_nonarray
00:09:30.229  ************************************
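Companion fixture shape (again assumed from the error text): 'subsystems' present but given as an object rather than an array, tripping the "'subsystems' should be an array" error above:

    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev", "config": [] } }
    EOF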
00:09:30.229   10:47:19	-- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]]
00:09:30.229   10:47:19	-- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]]
00:09:30.488   10:47:19	-- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]]
00:09:30.488   10:47:19	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:09:30.488   10:47:19	-- bdev/blockdev.sh@809 -- # cleanup
00:09:30.488   10:47:19	-- bdev/blockdev.sh@21 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile
00:09:30.488   10:47:19	-- bdev/blockdev.sh@22 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:09:30.488   10:47:19	-- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]]
00:09:30.488   10:47:19	-- bdev/blockdev.sh@28 -- # [[ nvme == daos ]]
00:09:30.488   10:47:19	-- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]]
00:09:30.488   10:47:19	-- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]]
00:09:30.488  
00:09:30.488  real	1m9.398s
00:09:30.488  user	1m44.363s
00:09:30.488  sys	0m5.174s
00:09:30.488   10:47:19	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:30.488   10:47:19	-- common/autotest_common.sh@10 -- # set +x
00:09:30.488  ************************************
00:09:30.488  END TEST blockdev_nvme
00:09:30.488  ************************************
00:09:30.488    10:47:19	-- spdk/autotest.sh@206 -- # uname -s
00:09:30.488   10:47:19	-- spdk/autotest.sh@206 -- # [[ Linux == Linux ]]
00:09:30.488   10:47:19	-- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt
00:09:30.488   10:47:19	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:30.488   10:47:19	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:30.488   10:47:19	-- common/autotest_common.sh@10 -- # set +x
00:09:30.488  ************************************
00:09:30.488  START TEST blockdev_nvme_gpt
00:09:30.488  ************************************
00:09:30.488   10:47:19	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/blockdev.sh gpt
00:09:30.488  * Looking for test storage...
00:09:30.488  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev
00:09:30.488    10:47:19	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:30.488     10:47:19	-- common/autotest_common.sh@1690 -- # lcov --version
00:09:30.488     10:47:19	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:30.488    10:47:19	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:30.488    10:47:19	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:30.488    10:47:19	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:30.488    10:47:19	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:30.488    10:47:19	-- scripts/common.sh@335 -- # IFS=.-:
00:09:30.488    10:47:19	-- scripts/common.sh@335 -- # read -ra ver1
00:09:30.488    10:47:19	-- scripts/common.sh@336 -- # IFS=.-:
00:09:30.488    10:47:19	-- scripts/common.sh@336 -- # read -ra ver2
00:09:30.488    10:47:19	-- scripts/common.sh@337 -- # local 'op=<'
00:09:30.488    10:47:19	-- scripts/common.sh@339 -- # ver1_l=2
00:09:30.488    10:47:19	-- scripts/common.sh@340 -- # ver2_l=1
00:09:30.488    10:47:19	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:30.488    10:47:19	-- scripts/common.sh@343 -- # case "$op" in
00:09:30.488    10:47:19	-- scripts/common.sh@344 -- # : 1
00:09:30.488    10:47:19	-- scripts/common.sh@363 -- # (( v = 0 ))
00:09:30.488    10:47:19	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:30.488     10:47:19	-- scripts/common.sh@364 -- # decimal 1
00:09:30.488     10:47:19	-- scripts/common.sh@352 -- # local d=1
00:09:30.488     10:47:19	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:30.488     10:47:19	-- scripts/common.sh@354 -- # echo 1
00:09:30.488    10:47:19	-- scripts/common.sh@364 -- # ver1[v]=1
00:09:30.488     10:47:19	-- scripts/common.sh@365 -- # decimal 2
00:09:30.488     10:47:19	-- scripts/common.sh@352 -- # local d=2
00:09:30.488     10:47:19	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:30.488     10:47:19	-- scripts/common.sh@354 -- # echo 2
00:09:30.489    10:47:19	-- scripts/common.sh@365 -- # ver2[v]=2
00:09:30.489    10:47:19	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:09:30.489    10:47:19	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:09:30.489    10:47:19	-- scripts/common.sh@367 -- # return 0
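The lt/cmp_versions trace above is a field-wise numeric compare after splitting both versions on '.', '-' and ':'. Reduced to a sketch (version_lt is illustrative, not the shipped scripts/common.sh helper):

    version_lt() {                        # 0 (true) iff $1 < $2
        local IFS=.-: v
        local -a a b
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((v = 0; v < ${#a[@]} || v < ${#b[@]}; v++)); do
            (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
            (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        done
        return 1                          # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2: legacy --rc lcov_* options selected"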
00:09:30.489    10:47:19	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:30.489    10:47:19	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:09:30.489  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.489  		--rc genhtml_branch_coverage=1
00:09:30.489  		--rc genhtml_function_coverage=1
00:09:30.489  		--rc genhtml_legend=1
00:09:30.489  		--rc geninfo_all_blocks=1
00:09:30.489  		--rc geninfo_unexecuted_blocks=1
00:09:30.489  		
00:09:30.489  		'
00:09:30.489    10:47:19	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:09:30.489  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.489  		--rc genhtml_branch_coverage=1
00:09:30.489  		--rc genhtml_function_coverage=1
00:09:30.489  		--rc genhtml_legend=1
00:09:30.489  		--rc geninfo_all_blocks=1
00:09:30.489  		--rc geninfo_unexecuted_blocks=1
00:09:30.489  		
00:09:30.489  		'
00:09:30.489    10:47:19	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:09:30.489  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.489  		--rc genhtml_branch_coverage=1
00:09:30.489  		--rc genhtml_function_coverage=1
00:09:30.489  		--rc genhtml_legend=1
00:09:30.489  		--rc geninfo_all_blocks=1
00:09:30.489  		--rc geninfo_unexecuted_blocks=1
00:09:30.489  		
00:09:30.489  		'
00:09:30.489    10:47:19	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:09:30.489  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.489  		--rc genhtml_branch_coverage=1
00:09:30.489  		--rc genhtml_function_coverage=1
00:09:30.489  		--rc genhtml_legend=1
00:09:30.489  		--rc geninfo_all_blocks=1
00:09:30.489  		--rc geninfo_unexecuted_blocks=1
00:09:30.489  		
00:09:30.489  		'
00:09:30.489   10:47:19	-- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbd_common.sh
00:09:30.489    10:47:19	-- bdev/nbd_common.sh@6 -- # set -e
00:09:30.489   10:47:19	-- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:09:30.489   10:47:19	-- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:09:30.489   10:47:19	-- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json
00:09:30.489   10:47:19	-- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json
00:09:30.489   10:47:19	-- bdev/blockdev.sh@18 -- # :
00:09:30.489   10:47:19	-- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0
00:09:30.489   10:47:19	-- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1
00:09:30.489   10:47:19	-- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5
00:09:30.489    10:47:19	-- bdev/blockdev.sh@672 -- # uname -s
00:09:30.747   10:47:19	-- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']'
00:09:30.747   10:47:19	-- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0
00:09:30.747   10:47:19	-- bdev/blockdev.sh@680 -- # test_type=gpt
00:09:30.747   10:47:19	-- bdev/blockdev.sh@681 -- # crypto_device=
00:09:30.747   10:47:19	-- bdev/blockdev.sh@682 -- # dek=
00:09:30.747   10:47:19	-- bdev/blockdev.sh@683 -- # env_ctx=
00:09:30.747   10:47:19	-- bdev/blockdev.sh@684 -- # wait_for_rpc=
00:09:30.747   10:47:19	-- bdev/blockdev.sh@685 -- # '[' -n '' ']'
00:09:30.747   10:47:19	-- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]]
00:09:30.747   10:47:19	-- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]]
00:09:30.748   10:47:19	-- bdev/blockdev.sh@691 -- # start_spdk_tgt
00:09:30.748   10:47:19	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=2109840
00:09:30.748   10:47:19	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:09:30.748   10:47:19	-- bdev/blockdev.sh@47 -- # waitforlisten 2109840
00:09:30.748   10:47:19	-- common/autotest_common.sh@829 -- # '[' -z 2109840 ']'
00:09:30.748   10:47:19	-- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' ''
00:09:30.748   10:47:19	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:30.748   10:47:19	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:30.748   10:47:19	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:30.748  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:30.748   10:47:19	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:30.748   10:47:19	-- common/autotest_common.sh@10 -- # set +x
00:09:30.748  [2024-12-15 10:47:19.564211] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:30.748  [2024-12-15 10:47:19.564283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2109840 ]
00:09:30.748  EAL: No free 2048 kB hugepages reported on node 1
00:09:30.748  [2024-12-15 10:47:19.669765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:31.006  [2024-12-15 10:47:19.770268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:09:31.006  [2024-12-15 10:47:19.770420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:31.006  [2024-12-15 10:47:19.949371] 'OCF_Core' volume operations registered
00:09:31.006  [2024-12-15 10:47:19.952633] 'OCF_Cache' volume operations registered
00:09:31.006  [2024-12-15 10:47:19.956277] 'OCF Composite' volume operations registered
00:09:31.006  [2024-12-15 10:47:19.959546] 'SPDK_block_device' volume operations registered
00:09:31.576   10:47:20	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:31.576   10:47:20	-- common/autotest_common.sh@862 -- # return 0
00:09:31.576   10:47:20	-- bdev/blockdev.sh@692 -- # case "$test_type" in
00:09:31.576   10:47:20	-- bdev/blockdev.sh@700 -- # setup_gpt_conf
00:09:31.576   10:47:20	-- bdev/blockdev.sh@102 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:09:34.864  Waiting for block devices as requested
00:09:34.864  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:09:34.864  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:09:34.864  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:09:34.864  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:09:34.864  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:09:34.864  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:09:35.122  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:09:35.122  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:09:35.122  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:09:35.381  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:09:35.381  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:09:35.381  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:09:35.640  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:09:35.640  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:09:35.640  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:09:35.900  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:09:35.900  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:09:35.900   10:47:24	-- bdev/blockdev.sh@103 -- # get_zoned_devs
00:09:35.900   10:47:24	-- common/autotest_common.sh@1664 -- # zoned_devs=()
00:09:35.900   10:47:24	-- common/autotest_common.sh@1664 -- # local -gA zoned_devs
00:09:35.900   10:47:24	-- common/autotest_common.sh@1665 -- # local nvme bdf
00:09:35.900   10:47:24	-- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme*
00:09:35.900   10:47:24	-- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1
00:09:35.900   10:47:24	-- common/autotest_common.sh@1657 -- # local device=nvme0n1
00:09:35.900   10:47:24	-- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:09:35.900   10:47:24	-- common/autotest_common.sh@1660 -- # [[ none != none ]]
00:09:35.900   10:47:24	-- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:5e:00.0/nvme/nvme0/nvme0n1')
00:09:35.900   10:47:24	-- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev
00:09:35.900   10:47:24	-- bdev/blockdev.sh@106 -- # gpt_nvme=
00:09:35.900   10:47:24	-- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}"
00:09:35.900   10:47:24	-- bdev/blockdev.sh@109 -- # [[ -z '' ]]
00:09:35.900   10:47:24	-- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1
00:09:35.900    10:47:24	-- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print
00:09:35.900   10:47:24	-- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:09:35.900  BYT;
00:09:35.900  /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:;'
00:09:35.900   10:47:24	-- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:09:35.900  BYT;
00:09:35.900  /dev/nvme0n1:4001GB:nvme:512:512:unknown:INTEL SSDPE2KX040T8:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:09:35.900   10:47:24	-- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1
00:09:35.900   10:47:24	-- bdev/blockdev.sh@114 -- # break
00:09:35.900   10:47:24	-- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]]
00:09:35.900   10:47:24	-- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:09:35.900   10:47:24	-- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:09:35.900   10:47:24	-- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:09:35.900    10:47:24	-- bdev/blockdev.sh@128 -- # get_spdk_gpt_old
00:09:35.900    10:47:24	-- scripts/common.sh@410 -- # local spdk_guid
00:09:35.900    10:47:24	-- scripts/common.sh@412 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]]
00:09:35.900    10:47:24	-- scripts/common.sh@414 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:09:35.900    10:47:24	-- scripts/common.sh@415 -- # IFS='()'
00:09:35.900    10:47:24	-- scripts/common.sh@415 -- # read -r _ spdk_guid _
00:09:35.900     10:47:24	-- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:09:35.900    10:47:24	-- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:09:35.900    10:47:24	-- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:35.900    10:47:24	-- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:35.900   10:47:24	-- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:09:35.900    10:47:24	-- bdev/blockdev.sh@129 -- # get_spdk_gpt
00:09:35.900    10:47:24	-- scripts/common.sh@422 -- # local spdk_guid
00:09:35.900    10:47:24	-- scripts/common.sh@424 -- # [[ -e /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h ]]
00:09:35.900    10:47:24	-- scripts/common.sh@426 -- # GPT_H=/var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:09:35.900    10:47:24	-- scripts/common.sh@427 -- # IFS='()'
00:09:35.900    10:47:24	-- scripts/common.sh@427 -- # read -r _ spdk_guid _
00:09:35.900     10:47:24	-- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /var/jenkins/workspace/nvme-phy-autotest/spdk/module/bdev/gpt/gpt.h
00:09:35.900    10:47:24	-- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:09:35.900    10:47:24	-- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:09:35.900    10:47:24	-- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:09:35.900   10:47:24	-- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
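get_spdk_gpt and get_spdk_gpt_old read the partition-type GUIDs straight out of the C header rather than hard-coding them; splitting on parentheses isolates the macro's argument list in a single read. A sketch of the extraction traced above (header path shortened; the #define shape is inferred from the intermediate values in the trace):

  GPT_H=module/bdev/gpt/gpt.h
  # The header carries a line like:
  #   #define SPDK_GPT_PART_TYPE_GUID SPDK_GPT_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
  # With IFS='()' the text between the parentheses lands in the middle field.
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
  spdk_guid=${spdk_guid//, /-}   # join the five fields: 0x6527994e-0x2c5a-...
  spdk_guid=${spdk_guid//0x/}    # strip prefixes: 6527994e-2c5a-4eec-9613-8f5944074e8b
  echo "$spdk_guid"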
00:09:35.900   10:47:24	-- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:09:37.278  The operation has completed successfully.
00:09:37.278   10:47:25	-- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:09:38.215  The operation has completed successfully.
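The two sgdisk calls stamp the attributes everything later depends on: "-t N:GUID" sets partition N's type GUID, which is how SPDK's gpt bdev module recognizes partitions it may claim, and "-u N:GUID" pins the unique partition GUID so the bdevs come up with deterministic UUIDs (used as aliases in the JSON dump below). The equivalent one-liners, values copied from this run:

  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1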
00:09:38.215   10:47:26	-- bdev/blockdev.sh@132 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:09:41.506  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:09:41.506  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:09:44.793  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
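setup.sh detaches the kernel drivers (ioatdma for the DMA engines, nvme for the SSD) and hands each device to vfio-pci so the SPDK target can drive it from user space. Per device this reduces to the standard sysfs rebind sequence; a sketch of the mechanism, not necessarily setup.sh's exact code (BDF from this run; needs root and the vfio-pci module loaded):

  bdf=0000:5e:00.0
  echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"     # drop nvme
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin new driver
  echo "$bdf"   > /sys/bus/pci/drivers_probe                    # rebind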
00:09:44.793   10:47:33	-- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs
00:09:44.793   10:47:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.793   10:47:33	-- common/autotest_common.sh@10 -- # set +x
00:09:44.793  []
00:09:44.793   10:47:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:44.793   10:47:33	-- bdev/blockdev.sh@134 -- # setup_nvme_conf
00:09:44.793   10:47:33	-- bdev/blockdev.sh@79 -- # local json
00:09:44.793   10:47:33	-- bdev/blockdev.sh@80 -- # mapfile -t json
00:09:44.793    10:47:33	-- bdev/blockdev.sh@80 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:09:44.794   10:47:33	-- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:5e:00.0" } } ] }'\'''
00:09:44.794   10:47:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:44.794   10:47:33	-- common/autotest_common.sh@10 -- # set +x
00:09:47.485   10:47:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.485   10:47:36	-- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine
00:09:47.485   10:47:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.485   10:47:36	-- common/autotest_common.sh@10 -- # set +x
00:09:47.485   10:47:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
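setup_nvme_conf feeds the bdev subsystem config generated by gen_nvme.sh to the target, and bdev_wait_for_examine then blocks until the gpt module has examined the new namespace and exposed both partitions. The same two steps by hand against a running target (rpc.py path relative to an SPDK checkout):

  ./scripts/rpc.py load_subsystem_config -j '{ "subsystem": "bdev", "config": [
    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:5e:00.0" } } ] }'
  ./scripts/rpc.py bdev_wait_for_examine   # returns once examine callbacks finish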
00:09:47.485   10:47:36	-- bdev/blockdev.sh@738 -- # cat
00:09:47.485    10:47:36	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel
00:09:47.485    10:47:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.485    10:47:36	-- common/autotest_common.sh@10 -- # set +x
00:09:47.485    10:47:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.485    10:47:36	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev
00:09:47.485    10:47:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.485    10:47:36	-- common/autotest_common.sh@10 -- # set +x
00:09:47.485    10:47:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.485    10:47:36	-- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf
00:09:47.485    10:47:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.485    10:47:36	-- common/autotest_common.sh@10 -- # set +x
00:09:47.485    10:47:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.485   10:47:36	-- bdev/blockdev.sh@746 -- # mapfile -t bdevs
00:09:47.485    10:47:36	-- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs
00:09:47.485    10:47:36	-- common/autotest_common.sh@561 -- # xtrace_disable
00:09:47.485    10:47:36	-- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)'
00:09:47.485    10:47:36	-- common/autotest_common.sh@10 -- # set +x
00:09:47.485    10:47:36	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:47.485   10:47:36	-- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name
00:09:47.485    10:47:36	-- bdev/blockdev.sh@747 -- # printf '%s\n' '{' '  "name": "Nvme0n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 512,' '  "num_blocks": 3907016704,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 2048,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme0n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 512,' '  "num_blocks": 3907016703,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "write_zeroes": true,' '    "flush": true,' '    "reset": true,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "nvme_admin": false,' '    "nvme_io": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 3907018752,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
00:09:47.485    10:47:36	-- bdev/blockdev.sh@747 -- # jq -r .name
00:09:47.744   10:47:36	-- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}")
00:09:47.744   10:47:36	-- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1
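The verbose printf/jq pair above is just name extraction: bdev_get_bdevs is filtered down to unclaimed bdevs (the raw namespace Nvme0n1 is claimed by the gpt module) and their names become the test's bdev list, with the first entry chosen as the hello-world target. Condensed into one pipeline (rpc.py path assumed):

  mapfile -t bdevs_name < <(./scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name')
  hello_world_bdev=${bdevs_name[0]}   # Nvme0n1p1 in this run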
00:09:47.744   10:47:36	-- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT
00:09:47.744   10:47:36	-- bdev/blockdev.sh@752 -- # killprocess 2109840
00:09:47.744   10:47:36	-- common/autotest_common.sh@936 -- # '[' -z 2109840 ']'
00:09:47.744   10:47:36	-- common/autotest_common.sh@940 -- # kill -0 2109840
00:09:47.744    10:47:36	-- common/autotest_common.sh@941 -- # uname
00:09:47.744   10:47:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:47.744    10:47:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2109840
00:09:47.744   10:47:36	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:47.744   10:47:36	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:47.744   10:47:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2109840'
00:09:47.744  killing process with pid 2109840
00:09:47.744   10:47:36	-- common/autotest_common.sh@955 -- # kill 2109840
00:09:47.744   10:47:36	-- common/autotest_common.sh@960 -- # wait 2109840
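killprocess is the common teardown helper seen throughout these tests: it checks that the pid is still alive, refuses to signal a bare sudo wrapper, then kills and reaps the process. A simplified sketch of the flow traced above (the real helper in autotest_common.sh has more branches):

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0              # already gone
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                           # reap our own child
  }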
00:09:51.937   10:47:40	-- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT
00:09:51.937   10:47:40	-- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:09:51.937   10:47:40	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:09:51.937   10:47:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:51.937   10:47:40	-- common/autotest_common.sh@10 -- # set +x
00:09:51.937  ************************************
00:09:51.937  START TEST bdev_hello_world
00:09:51.937  ************************************
00:09:51.937   10:47:40	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:09:51.937  [2024-12-15 10:47:40.821709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:51.937  [2024-12-15 10:47:40.821779] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114002 ]
00:09:51.937  EAL: No free 2048 kB hugepages reported on node 1
00:09:51.937  [2024-12-15 10:47:40.927683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:52.196  [2024-12-15 10:47:41.021230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:52.455  [2024-12-15 10:47:41.264111] 'OCF_Core' volume operations registered
00:09:52.455  [2024-12-15 10:47:41.267518] 'OCF_Cache' volume operations registered
00:09:52.455  [2024-12-15 10:47:41.271415] 'OCF Composite' volume operations registered
00:09:52.455  [2024-12-15 10:47:41.274885] 'SPDK_block_device' volume operations registered
00:09:55.746  [2024-12-15 10:47:44.137897] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:09:55.746  [2024-12-15 10:47:44.137930] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:09:55.746  [2024-12-15 10:47:44.137948] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:09:55.746  [2024-12-15 10:47:44.140285] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:09:55.746  [2024-12-15 10:47:44.140462] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:09:55.746  [2024-12-15 10:47:44.140483] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:09:55.746  [2024-12-15 10:47:44.144504] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:09:55.746  
00:09:55.746  [2024-12-15 10:47:44.144526] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:09:59.939  
00:09:59.939  real	0m7.385s
00:09:59.939  user	0m6.284s
00:09:59.939  sys	0m0.353s
00:09:59.939   10:47:48	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:59.939   10:47:48	-- common/autotest_common.sh@10 -- # set +x
00:09:59.939  ************************************
00:09:59.939  END TEST bdev_hello_world
00:09:59.939  ************************************
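hello_bdev is the smallest possible round trip, and the NOTICE lines above trace it end to end: open the bdev by name, get an I/O channel, write one buffer, read it back, and confirm the string "Hello World!" before stopping the app. The whole test is a single invocation that can be repeated by hand (paths relative to an SPDK build tree, as in the run above):

  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1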
00:09:59.939   10:47:48	-- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds ''
00:09:59.939   10:47:48	-- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:59.939   10:47:48	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:59.939   10:47:48	-- common/autotest_common.sh@10 -- # set +x
00:09:59.939  ************************************
00:09:59.939  START TEST bdev_bounds
00:09:59.939  ************************************
00:09:59.939   10:47:48	-- common/autotest_common.sh@1114 -- # bdev_bounds ''
00:09:59.939   10:47:48	-- bdev/blockdev.sh@288 -- # bdevio_pid=2114934
00:09:59.939   10:47:48	-- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:09:59.939   10:47:48	-- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 2114934'
00:09:59.939  Process bdevio pid: 2114934
00:09:59.939   10:47:48	-- bdev/blockdev.sh@291 -- # waitforlisten 2114934
00:09:59.939   10:47:48	-- common/autotest_common.sh@829 -- # '[' -z 2114934 ']'
00:09:59.939   10:47:48	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:59.939   10:47:48	-- common/autotest_common.sh@834 -- # local max_retries=100
00:09:59.939   10:47:48	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:59.939  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:59.939   10:47:48	-- common/autotest_common.sh@838 -- # xtrace_disable
00:09:59.939   10:47:48	-- bdev/blockdev.sh@287 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:09:59.939   10:47:48	-- common/autotest_common.sh@10 -- # set +x
00:09:59.939  [2024-12-15 10:47:48.260563] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:59.939  [2024-12-15 10:47:48.260638] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114934 ]
00:09:59.939  EAL: No free 2048 kB hugepages reported on node 1
00:09:59.939  [2024-12-15 10:47:48.362949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:59.939  [2024-12-15 10:47:48.469151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:59.939  [2024-12-15 10:47:48.469235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:09:59.939  [2024-12-15 10:47:48.469239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.939  [2024-12-15 10:47:48.722397] 'OCF_Core' volume operations registered
00:09:59.939  [2024-12-15 10:47:48.725886] 'OCF_Cache' volume operations registered
00:09:59.939  [2024-12-15 10:47:48.729852] 'OCF Composite' volume operations registered
00:09:59.939  [2024-12-15 10:47:48.733333] 'SPDK_block_device' volume operations registered
00:10:03.228   10:47:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:03.228   10:47:52	-- common/autotest_common.sh@862 -- # return 0
00:10:03.228   10:47:52	-- bdev/blockdev.sh@292 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests
00:10:03.488  I/O targets:
00:10:03.488    Nvme0n1p1: 3907016704 blocks of 512 bytes (1907723 MiB)
00:10:03.488    Nvme0n1p2: 3907016703 blocks of 512 bytes (1907723 MiB)
00:10:03.488  
00:10:03.488  
00:10:03.488       CUnit - A unit testing framework for C - Version 2.1-3
00:10:03.488       http://cunit.sourceforge.net/
00:10:03.488  
00:10:03.488  
00:10:03.488  Suite: bdevio tests on: Nvme0n1p2
00:10:03.488    Test: blockdev write read block ...passed
00:10:03.488    Test: blockdev write zeroes read block ...passed
00:10:03.488    Test: blockdev write zeroes read no split ...passed
00:10:03.488    Test: blockdev write zeroes read split ...passed
00:10:03.488    Test: blockdev write zeroes read split partial ...passed
00:10:03.488    Test: blockdev reset ...[2024-12-15 10:47:52.376551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:10:03.488  [2024-12-15 10:47:52.379141] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:10:03.488  passed
00:10:03.488    Test: blockdev write read 8 blocks ...passed
00:10:03.488    Test: blockdev write read size > 128k ...passed
00:10:03.488    Test: blockdev write read invalid size ...passed
00:10:03.488    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:03.488    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:03.488    Test: blockdev write read max offset ...passed
00:10:03.488    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:03.488    Test: blockdev writev readv 8 blocks ...passed
00:10:03.488    Test: blockdev writev readv 30 x 1block ...passed
00:10:03.488    Test: blockdev writev readv block ...passed
00:10:03.488    Test: blockdev writev readv size > 128k ...passed
00:10:03.488    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:03.488    Test: blockdev comparev and writev ...passed
00:10:03.488    Test: blockdev nvme passthru rw ...passed
00:10:03.488    Test: blockdev nvme passthru vendor specific ...passed
00:10:03.488    Test: blockdev nvme admin passthru ...passed
00:10:03.488    Test: blockdev copy ...passed
00:10:03.488  Suite: bdevio tests on: Nvme0n1p1
00:10:03.488    Test: blockdev write read block ...passed
00:10:03.488    Test: blockdev write zeroes read block ...passed
00:10:03.488    Test: blockdev write zeroes read no split ...passed
00:10:03.488    Test: blockdev write zeroes read split ...passed
00:10:03.488    Test: blockdev write zeroes read split partial ...passed
00:10:03.488    Test: blockdev reset ...[2024-12-15 10:47:52.460775] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:10:03.488  [2024-12-15 10:47:52.463044] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:10:03.488  passed
00:10:03.488    Test: blockdev write read 8 blocks ...passed
00:10:03.488    Test: blockdev write read size > 128k ...passed
00:10:03.488    Test: blockdev write read invalid size ...passed
00:10:03.488    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:03.488    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:03.488    Test: blockdev write read max offset ...passed
00:10:03.488    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:03.488    Test: blockdev writev readv 8 blocks ...passed
00:10:03.488    Test: blockdev writev readv 30 x 1block ...passed
00:10:03.488    Test: blockdev writev readv block ...passed
00:10:03.488    Test: blockdev writev readv size > 128k ...passed
00:10:03.747    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:03.747    Test: blockdev comparev and writev ...passed
00:10:03.747    Test: blockdev nvme passthru rw ...passed
00:10:03.747    Test: blockdev nvme passthru vendor specific ...passed
00:10:03.747    Test: blockdev nvme admin passthru ...passed
00:10:03.748    Test: blockdev copy ...passed
00:10:03.748  
00:10:03.748  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:03.748                suites      2      2    n/a      0        0
00:10:03.748                 tests     46     46     46      0        0
00:10:03.748               asserts    260    260    260      0      n/a
00:10:03.748  
00:10:03.748  Elapsed time =    0.342 seconds
00:10:03.748  0
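bdevio is a CUnit harness started with -w, so it sits idle until tests.py issues the perform_tests RPC; the 46 passed tests then exercise the write/read, writev/readv, zeroes, reset, compare, and passthru paths against both GPT partitions. The two halves of that handshake, by hand (the real wrapper waits for the RPC socket before driving it):

  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # server, waits
  ./test/bdev/bdevio/tests.py perform_tests                        # kicks off the suite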
00:10:03.748   10:47:52	-- bdev/blockdev.sh@293 -- # killprocess 2114934
00:10:03.748   10:47:52	-- common/autotest_common.sh@936 -- # '[' -z 2114934 ']'
00:10:03.748   10:47:52	-- common/autotest_common.sh@940 -- # kill -0 2114934
00:10:03.748    10:47:52	-- common/autotest_common.sh@941 -- # uname
00:10:03.748   10:47:52	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:03.748    10:47:52	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2114934
00:10:03.748   10:47:52	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:03.748   10:47:52	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:03.748   10:47:52	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2114934'
00:10:03.748  killing process with pid 2114934
00:10:03.748   10:47:52	-- common/autotest_common.sh@955 -- # kill 2114934
00:10:03.748   10:47:52	-- common/autotest_common.sh@960 -- # wait 2114934
00:10:07.946   10:47:56	-- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT
00:10:07.946  
00:10:07.946  real	0m8.444s
00:10:07.946  user	0m24.607s
00:10:07.946  sys	0m0.715s
00:10:07.946   10:47:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:07.946   10:47:56	-- common/autotest_common.sh@10 -- # set +x
00:10:07.946  ************************************
00:10:07.946  END TEST bdev_bounds
00:10:07.946  ************************************
00:10:07.946   10:47:56	-- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:10:07.946   10:47:56	-- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:10:07.946   10:47:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:07.946   10:47:56	-- common/autotest_common.sh@10 -- # set +x
00:10:07.946  ************************************
00:10:07.946  START TEST bdev_nbd
00:10:07.946  ************************************
00:10:07.946   10:47:56	-- common/autotest_common.sh@1114 -- # nbd_function_test /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:10:07.946    10:47:56	-- bdev/blockdev.sh@298 -- # uname -s
00:10:07.946   10:47:56	-- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]]
00:10:07.946   10:47:56	-- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:07.946   10:47:56	-- bdev/blockdev.sh@301 -- # local conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:10:07.946   10:47:56	-- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2')
00:10:07.946   10:47:56	-- bdev/blockdev.sh@302 -- # local bdev_all
00:10:07.946   10:47:56	-- bdev/blockdev.sh@303 -- # local bdev_num=2
00:10:07.946   10:47:56	-- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]]
00:10:07.946   10:47:56	-- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:07.946   10:47:56	-- bdev/blockdev.sh@309 -- # local nbd_all
00:10:07.946   10:47:56	-- bdev/blockdev.sh@310 -- # bdev_num=2
00:10:07.946   10:47:56	-- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:07.946   10:47:56	-- bdev/blockdev.sh@312 -- # local nbd_list
00:10:07.946   10:47:56	-- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:10:07.946   10:47:56	-- bdev/blockdev.sh@313 -- # local bdev_list
00:10:07.946   10:47:56	-- bdev/blockdev.sh@316 -- # nbd_pid=2116199
00:10:07.946   10:47:56	-- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:10:07.946   10:47:56	-- bdev/blockdev.sh@315 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json ''
00:10:07.946   10:47:56	-- bdev/blockdev.sh@318 -- # waitforlisten 2116199 /var/tmp/spdk-nbd.sock
00:10:07.946   10:47:56	-- common/autotest_common.sh@829 -- # '[' -z 2116199 ']'
00:10:07.946   10:47:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:07.946   10:47:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:10:07.946   10:47:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:07.946  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:07.946   10:47:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:10:07.946   10:47:56	-- common/autotest_common.sh@10 -- # set +x
00:10:07.946  [2024-12-15 10:47:56.762354] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:07.946  [2024-12-15 10:47:56.762425] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:07.946  EAL: No free 2048 kB hugepages reported on node 1
00:10:07.946  [2024-12-15 10:47:56.866145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:08.205  [2024-12-15 10:47:56.964489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:08.465  [2024-12-15 10:47:57.221580] 'OCF_Core' volume operations registered
00:10:08.465  [2024-12-15 10:47:57.225094] 'OCF_Cache' volume operations registered
00:10:08.465  [2024-12-15 10:47:57.229060] 'OCF Composite' volume operations registered
00:10:08.465  [2024-12-15 10:47:57.232577] 'SPDK_block_device' volume operations registered
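bdev_nbd pushes the same two partitions through the kernel's network block device layer: blockdev.sh@307 already confirmed /sys/module/nbd is present, and bdev_svc is the minimal SPDK app whose only job here is to serve the nbd_* RPCs on /var/tmp/spdk-nbd.sock. The preconditions spelled out (module name is the stock kernel one; paths relative to an SPDK checkout):

  modprobe nbd                          # provides /dev/nbd0, /dev/nbd1, ...
  [[ -e /sys/module/nbd ]]              # the same guard as blockdev.sh@307
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json test/bdev/bdev.json &      # RPC server for the nbd_* calls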
00:10:11.756   10:48:00	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:11.756   10:48:00	-- common/autotest_common.sh@862 -- # return 0
00:10:11.756   10:48:00	-- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@114 -- # local bdev_list
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@23 -- # local bdev_list
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@24 -- # local i
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@25 -- # local nbd_device
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:10:11.756   10:48:00	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:10:11.756    10:48:00	-- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:10:12.015   10:48:00	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:10:12.015    10:48:00	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:10:12.016   10:48:00	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:10:12.016   10:48:00	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:10:12.016   10:48:00	-- common/autotest_common.sh@867 -- # local i
00:10:12.016   10:48:00	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:10:12.016   10:48:00	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:10:12.016   10:48:00	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:10:12.016   10:48:00	-- common/autotest_common.sh@871 -- # break
00:10:12.016   10:48:00	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:10:12.016   10:48:00	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:10:12.016   10:48:00	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:12.016  1+0 records in
00:10:12.016  1+0 records out
00:10:12.016  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027616 s, 14.8 MB/s
00:10:12.016    10:48:00	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:12.016   10:48:00	-- common/autotest_common.sh@884 -- # size=4096
00:10:12.016   10:48:00	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:12.016   10:48:00	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:10:12.016   10:48:00	-- common/autotest_common.sh@887 -- # return 0
00:10:12.016   10:48:00	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:12.016   10:48:00	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:10:12.016    10:48:00	-- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:10:12.275   10:48:01	-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:10:12.275    10:48:01	-- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:10:12.275   10:48:01	-- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:10:12.275   10:48:01	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:10:12.275   10:48:01	-- common/autotest_common.sh@867 -- # local i
00:10:12.275   10:48:01	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:10:12.275   10:48:01	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:10:12.275   10:48:01	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:10:12.275   10:48:01	-- common/autotest_common.sh@871 -- # break
00:10:12.275   10:48:01	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:10:12.275   10:48:01	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:10:12.275   10:48:01	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:12.275  1+0 records in
00:10:12.275  1+0 records out
00:10:12.275  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025323 s, 16.2 MB/s
00:10:12.275    10:48:01	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:12.275   10:48:01	-- common/autotest_common.sh@884 -- # size=4096
00:10:12.275   10:48:01	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:12.275   10:48:01	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:10:12.275   10:48:01	-- common/autotest_common.sh@887 -- # return 0
00:10:12.275   10:48:01	-- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:12.275   10:48:01	-- bdev/nbd_common.sh@27 -- # (( i < 2 ))
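waitfornbd, traced once per export above, is the readiness gate after each nbd_start_disk: poll /proc/partitions until the device node registers, then prove it is actually usable with a single 4 KiB O_DIRECT read. A simplified sketch (retry count as in the trace; scratch-file path and sleep interval assumed):

  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do            # wait for the kernel to list it
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do            # wait until a real read succeeds
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
      sleep 0.1
    done
    local size
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [[ $size != 0 ]]                           # one block must have landed
  }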
00:10:12.275    10:48:01	-- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:10:12.534    {
00:10:12.534      "nbd_device": "/dev/nbd0",
00:10:12.534      "bdev_name": "Nvme0n1p1"
00:10:12.534    },
00:10:12.534    {
00:10:12.534      "nbd_device": "/dev/nbd1",
00:10:12.534      "bdev_name": "Nvme0n1p2"
00:10:12.534    }
00:10:12.534  ]'
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:10:12.534    10:48:01	-- bdev/nbd_common.sh@119 -- # echo '[
00:10:12.534    {
00:10:12.534      "nbd_device": "/dev/nbd0",
00:10:12.534      "bdev_name": "Nvme0n1p1"
00:10:12.534    },
00:10:12.534    {
00:10:12.534      "nbd_device": "/dev/nbd1",
00:10:12.534      "bdev_name": "Nvme0n1p2"
00:10:12.534    }
00:10:12.534  ]'
00:10:12.534    10:48:01	-- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@51 -- # local i
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:12.534   10:48:01	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:12.794    10:48:01	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@41 -- # break
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@45 -- # return 0
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:12.794   10:48:01	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:13.053    10:48:01	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:13.053   10:48:01	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:13.053   10:48:01	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:13.053   10:48:01	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:13.053   10:48:01	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:13.053   10:48:01	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:13.053   10:48:01	-- bdev/nbd_common.sh@41 -- # break
00:10:13.053   10:48:01	-- bdev/nbd_common.sh@45 -- # return 0
00:10:13.053    10:48:01	-- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:13.053    10:48:01	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:13.053     10:48:01	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:13.312    10:48:02	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:13.312     10:48:02	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:13.312     10:48:02	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:13.312    10:48:02	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:13.312     10:48:02	-- bdev/nbd_common.sh@65 -- # echo ''
00:10:13.312     10:48:02	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:13.312     10:48:02	-- bdev/nbd_common.sh@65 -- # true
00:10:13.312    10:48:02	-- bdev/nbd_common.sh@65 -- # count=0
00:10:13.312    10:48:02	-- bdev/nbd_common.sh@66 -- # echo 0
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@122 -- # count=0
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@127 -- # return 0
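That return completes nbd_rpc_start_stop_verify: start one export per bdev, stop each one, then assert that nbd_get_disks reports an empty list. The zero-count check above condenses to (socket path as used throughout this run):

  rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  for d in /dev/nbd0 /dev/nbd1; do
    $rpc nbd_stop_disk "$d"
  done
  # grep -c prints 0 on no matches but exits nonzero, hence the || true
  count=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [[ $count -eq 0 ]]                    # nothing may survive the teardown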
00:10:13.312   10:48:02	-- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@12 -- # local i
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:13.312   10:48:02	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
00:10:13.571  /dev/nbd0
00:10:13.571    10:48:02	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:13.571   10:48:02	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:13.571   10:48:02	-- common/autotest_common.sh@866 -- # local nbd_name=nbd0
00:10:13.571   10:48:02	-- common/autotest_common.sh@867 -- # local i
00:10:13.571   10:48:02	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:10:13.571   10:48:02	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:10:13.571   10:48:02	-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions
00:10:13.571   10:48:02	-- common/autotest_common.sh@871 -- # break
00:10:13.571   10:48:02	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:10:13.571   10:48:02	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:10:13.571   10:48:02	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:13.571  1+0 records in
00:10:13.571  1+0 records out
00:10:13.571  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289037 s, 14.2 MB/s
00:10:13.571    10:48:02	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:13.571   10:48:02	-- common/autotest_common.sh@884 -- # size=4096
00:10:13.571   10:48:02	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:13.571   10:48:02	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:10:13.571   10:48:02	-- common/autotest_common.sh@887 -- # return 0
00:10:13.571   10:48:02	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:13.571   10:48:02	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:13.571   10:48:02	-- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
00:10:13.831  /dev/nbd1
00:10:13.831    10:48:02	-- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:13.831   10:48:02	-- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:13.831   10:48:02	-- common/autotest_common.sh@866 -- # local nbd_name=nbd1
00:10:13.831   10:48:02	-- common/autotest_common.sh@867 -- # local i
00:10:13.831   10:48:02	-- common/autotest_common.sh@869 -- # (( i = 1 ))
00:10:13.831   10:48:02	-- common/autotest_common.sh@869 -- # (( i <= 20 ))
00:10:13.831   10:48:02	-- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions
00:10:13.831   10:48:02	-- common/autotest_common.sh@871 -- # break
00:10:13.831   10:48:02	-- common/autotest_common.sh@882 -- # (( i = 1 ))
00:10:13.831   10:48:02	-- common/autotest_common.sh@882 -- # (( i <= 20 ))
00:10:13.831   10:48:02	-- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:13.831  1+0 records in
00:10:13.831  1+0 records out
00:10:13.831  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338735 s, 12.1 MB/s
00:10:13.831    10:48:02	-- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:13.831   10:48:02	-- common/autotest_common.sh@884 -- # size=4096
00:10:13.831   10:48:02	-- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdtest
00:10:13.831   10:48:02	-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']'
00:10:13.831   10:48:02	-- common/autotest_common.sh@887 -- # return 0
00:10:13.831   10:48:02	-- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:13.831   10:48:02	-- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:10:13.831    10:48:02	-- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:13.831    10:48:02	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:13.831     10:48:02	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:14.090    10:48:03	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:14.090    {
00:10:14.090      "nbd_device": "/dev/nbd0",
00:10:14.090      "bdev_name": "Nvme0n1p1"
00:10:14.090    },
00:10:14.090    {
00:10:14.090      "nbd_device": "/dev/nbd1",
00:10:14.090      "bdev_name": "Nvme0n1p2"
00:10:14.090    }
00:10:14.090  ]'
00:10:14.090     10:48:03	-- bdev/nbd_common.sh@64 -- # echo '[
00:10:14.090    {
00:10:14.090      "nbd_device": "/dev/nbd0",
00:10:14.090      "bdev_name": "Nvme0n1p1"
00:10:14.090    },
00:10:14.090    {
00:10:14.090      "nbd_device": "/dev/nbd1",
00:10:14.090      "bdev_name": "Nvme0n1p2"
00:10:14.090    }
00:10:14.090  ]'
00:10:14.090     10:48:03	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:14.090    10:48:03	-- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:14.090  /dev/nbd1'
00:10:14.090     10:48:03	-- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:14.090  /dev/nbd1'
00:10:14.090     10:48:03	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:14.090    10:48:03	-- bdev/nbd_common.sh@65 -- # count=2
00:10:14.090    10:48:03	-- bdev/nbd_common.sh@66 -- # echo 2
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@95 -- # count=2
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@71 -- # local operation=write
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:14.090   10:48:03	-- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:10:14.350  256+0 records in
00:10:14.350  256+0 records out
00:10:14.350  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115655 s, 90.7 MB/s
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:14.350  256+0 records in
00:10:14.350  256+0 records out
00:10:14.350  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.041198 s, 25.5 MB/s
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:14.350  256+0 records in
00:10:14.350  256+0 records out
00:10:14.350  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0431772 s, 24.3 MB/s
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nbdrandtest
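nbd_dd_data_verify is the actual data-integrity pass: a single 1 MiB random pattern is written through both exports with O_DIRECT and then compared byte-for-byte against the source file, as the dd and cmp lines above show. End to end (scratch path shortened):

  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256     # 1 MiB pattern
  for d in /dev/nbd0 /dev/nbd1; do
    dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
  done
  for d in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M /tmp/nbdrandtest "$d"    # any mismatch fails the test
  done
  rm /tmp/nbdrandtest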
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:14.350   10:48:03	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:14.351   10:48:03	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:14.351   10:48:03	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:14.351   10:48:03	-- bdev/nbd_common.sh@51 -- # local i
00:10:14.351   10:48:03	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:14.351   10:48:03	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:14.610    10:48:03	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@41 -- # break
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@45 -- # return 0
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:14.610   10:48:03	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:14.869    10:48:03	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:14.869   10:48:03	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:14.869   10:48:03	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:14.869   10:48:03	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:14.869   10:48:03	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:14.869   10:48:03	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:14.869   10:48:03	-- bdev/nbd_common.sh@41 -- # break
00:10:14.869   10:48:03	-- bdev/nbd_common.sh@45 -- # return 0
00:10:14.869    10:48:03	-- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:14.869    10:48:03	-- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:14.869     10:48:03	-- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:15.128    10:48:04	-- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:15.128     10:48:04	-- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:15.128     10:48:04	-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:15.128    10:48:04	-- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:15.128     10:48:04	-- bdev/nbd_common.sh@65 -- # echo ''
00:10:15.128     10:48:04	-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:15.128     10:48:04	-- bdev/nbd_common.sh@65 -- # true
00:10:15.128    10:48:04	-- bdev/nbd_common.sh@65 -- # count=0
00:10:15.128    10:48:04	-- bdev/nbd_common.sh@66 -- # echo 0
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@104 -- # count=0
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@109 -- # return 0
00:10:15.128   10:48:04	-- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@132 -- # local nbd_list
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:10:15.128   10:48:04	-- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:10:15.387  malloc_lvol_verify
00:10:15.387   10:48:04	-- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:10:15.647  ca4447a6-3d51-49f7-b0ea-18228af29934
00:10:15.647   10:48:04	-- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:10:15.907  0adac07a-b0e9-49de-ab56-cdbdec7ff61f
00:10:15.907   10:48:04	-- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:10:16.166  /dev/nbd0
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:10:16.166  mke2fs 1.47.0 (5-Feb-2023)
00:10:16.166  Discarding device blocks:    0/4096         done                            
00:10:16.166  Creating filesystem with 4096 1k blocks and 1024 inodes
00:10:16.166  
00:10:16.166  Allocating group tables: 0/1   done                            
00:10:16.166  Writing inode tables: 0/1   done                            
00:10:16.166  Creating journal (1024 blocks): done
00:10:16.166  Writing superblocks and filesystem accounting information: 0/1   done
00:10:16.166  
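nbd_with_lvol_verify closes the loop one layer up: a 16 MiB malloc bdev hosts a logical volume store, a 4 MiB lvol from it is exported as /dev/nbd0, and mkfs.ext4 must complete on top of that whole stack. The same sequence as direct RPCs (socket path as above; sizes from the trace):

  rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB, 512 B blocks
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in "lvs"
  $rpc nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0                                    # exercises the full stack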
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@51 -- # local i
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:16.166   10:48:05	-- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:16.424    10:48:05	-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@41 -- # break
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@45 -- # return 0
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:10:16.424   10:48:05	-- bdev/nbd_common.sh@147 -- # return 0
00:10:16.424   10:48:05	-- bdev/blockdev.sh@324 -- # killprocess 2116199
00:10:16.424   10:48:05	-- common/autotest_common.sh@936 -- # '[' -z 2116199 ']'
00:10:16.424   10:48:05	-- common/autotest_common.sh@940 -- # kill -0 2116199
00:10:16.424    10:48:05	-- common/autotest_common.sh@941 -- # uname
00:10:16.424   10:48:05	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:16.424    10:48:05	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2116199
00:10:16.683   10:48:05	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:16.683   10:48:05	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:16.683   10:48:05	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2116199'
00:10:16.683  killing process with pid 2116199
00:10:16.683   10:48:05	-- common/autotest_common.sh@955 -- # kill 2116199
00:10:16.683   10:48:05	-- common/autotest_common.sh@960 -- # wait 2116199
00:10:20.878   10:48:09	-- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:10:20.878  
00:10:20.878  real	0m12.846s
00:10:20.878  user	0m15.417s
00:10:20.878  sys	0m2.628s
00:10:20.878   10:48:09	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:20.878   10:48:09	-- common/autotest_common.sh@10 -- # set +x
00:10:20.878  ************************************
00:10:20.878  END TEST bdev_nbd
00:10:20.878  ************************************
00:10:20.878   10:48:09	-- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:10:20.878   10:48:09	-- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']'
00:10:20.878   10:48:09	-- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']'
00:10:20.878   10:48:09	-- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:10:20.878  skipping fio tests on NVMe due to multi-ns failures.
00:10:20.878   10:48:09	-- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:20.878   10:48:09	-- bdev/blockdev.sh@775 -- # run_test bdev_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:20.878   10:48:09	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:10:20.878   10:48:09	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:20.878   10:48:09	-- common/autotest_common.sh@10 -- # set +x
00:10:20.878  ************************************
00:10:20.878  START TEST bdev_verify
00:10:20.878  ************************************
00:10:20.878   10:48:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:20.878  [2024-12-15 10:48:09.647392] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:20.878  [2024-12-15 10:48:09.647459] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118458 ]
00:10:20.878  EAL: No free 2048 kB hugepages reported on node 1
00:10:20.879  [2024-12-15 10:48:09.752017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:20.879  [2024-12-15 10:48:09.846614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:20.879  [2024-12-15 10:48:09.846628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:21.137  [2024-12-15 10:48:10.097162] 'OCF_Core' volume operations registered
00:10:21.137  [2024-12-15 10:48:10.100501] 'OCF_Cache' volume operations registered
00:10:21.137  [2024-12-15 10:48:10.104265] 'OCF Composite' volume operations registered
00:10:21.137  [2024-12-15 10:48:10.107600] 'SPDK_block_device' volume operations registered
00:10:24.427  Running I/O for 5 seconds...
00:10:29.699  
00:10:29.699                                                                                                  Latency(us)
00:10:29.699  
[2024-12-15T09:48:18.715Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:29.699  
[2024-12-15T09:48:18.715Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:29.699  	 Verification LBA range: start 0x0 length 0xe8e0580
00:10:29.699  	 Nvme0n1p1           :       5.03    7580.32      29.61       0.00     0.00   16835.38    1866.35   17324.30
00:10:29.699  
[2024-12-15T09:48:18.715Z]  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:29.699  	 Verification LBA range: start 0xe8e0580 length 0xe8e0580
00:10:29.699  	 Nvme0n1p1           :       5.03    7627.42      29.79       0.00     0.00   16706.23    1624.15   16868.40
00:10:29.699  
[2024-12-15T09:48:18.715Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:29.699  	 Verification LBA range: start 0x0 length 0xe8e057f
00:10:29.699  	 Nvme0n1p2           :       5.03    7560.42      29.53       0.00     0.00   16860.96    3319.54   21883.33
00:10:29.699  
[2024-12-15T09:48:18.715Z]  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:29.699  	 Verification LBA range: start 0xe8e057f length 0xe8e057f
00:10:29.699  	 Nvme0n1p2           :       5.02    7620.80      29.77       0.00     0.00   16739.41    6724.56   16982.37
00:10:29.699  
[2024-12-15T09:48:18.715Z]  ===================================================================================================================
00:10:29.699  
[2024-12-15T09:48:18.715Z]  Total                       :              30388.96     118.71       0.00     0.00   16785.28    1624.15   21883.33
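bdev_verify runs bdevperf in verify mode: 4 KiB I/O at queue depth 128 for 5 seconds on two cores (mask 0x3), so each partition gets a job from each reactor, and every write is read back and checked, which is why a clean run reports 0.00 Fail/s across all four jobs. The run above is reproducible as (paths relative to an SPDK build tree):

  ./build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3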
00:10:33.892  
00:10:33.892  real	0m12.594s
00:10:33.892  user	0m23.589s
00:10:33.892  sys	0m0.409s
00:10:33.892   10:48:22	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:33.892   10:48:22	-- common/autotest_common.sh@10 -- # set +x
00:10:33.892  ************************************
00:10:33.892  END TEST bdev_verify
00:10:33.892  ************************************
00:10:33.892   10:48:22	-- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:33.892   10:48:22	-- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:10:33.892   10:48:22	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:33.892   10:48:22	-- common/autotest_common.sh@10 -- # set +x
00:10:33.892  ************************************
00:10:33.892  START TEST bdev_verify_big_io
00:10:33.892  ************************************
00:10:33.892   10:48:22	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:33.892  [2024-12-15 10:48:22.298111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:33.892  [2024-12-15 10:48:22.298192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2120214 ]
00:10:33.892  EAL: No free 2048 kB hugepages reported on node 1
00:10:33.892  [2024-12-15 10:48:22.406435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:33.892  [2024-12-15 10:48:22.504385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:10:33.892  [2024-12-15 10:48:22.504391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:33.892  [2024-12-15 10:48:22.731304] 'OCF_Core' volume operations registered
00:10:33.892  [2024-12-15 10:48:22.734506] 'OCF_Cache' volume operations registered
00:10:33.892  [2024-12-15 10:48:22.738097] 'OCF Composite' volume operations registered
00:10:33.892  [2024-12-15 10:48:22.741300] 'SPDK_block_device' volume operations registered
00:10:37.182  Running I/O for 5 seconds...
00:10:42.470  
00:10:42.470                                                                                                  Latency(us)
00:10:42.470  
[2024-12-15T09:48:31.486Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:42.470  
[2024-12-15T09:48:31.486Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:42.470  	 Verification LBA range: start 0x0 length 0xe8e058
00:10:42.470  	 Nvme0n1p1           :       5.20     697.93      43.62       0.00     0.00  181120.79    3675.71  195126.32
00:10:42.470  
[2024-12-15T09:48:31.486Z]  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:42.470  	 Verification LBA range: start 0xe8e058 length 0xe8e058
00:10:42.470  	 Nvme0n1p1           :       5.21     712.64      44.54       0.00     0.00  177399.59    3276.80  205156.17
00:10:42.470  
[2024-12-15T09:48:31.486Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:10:42.470  	 Verification LBA range: start 0x0 length 0xe8e057
00:10:42.470  	 Nvme0n1p2           :       5.20     697.42      43.59       0.00     0.00  178121.31    2991.86  197861.73
00:10:42.470  
[2024-12-15T09:48:31.486Z]  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:10:42.470  	 Verification LBA range: start 0xe8e057 length 0xe8e057
00:10:42.470  	 Nvme0n1p2           :       5.22     712.14      44.51       0.00     0.00  174545.61    2706.92  204244.37
00:10:42.470  
[2024-12-15T09:48:31.486Z]  ===================================================================================================================
00:10:42.470  
[2024-12-15T09:48:31.486Z]  Total                       :               2820.13     176.26       0.00     0.00  177775.46    2706.92  205156.17
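Relative to the 4 KiB verify pass, per-job IOPS fall roughly 11x while throughput rises about 1.5x, which is exactly what a 16x larger I/O size predicts; the same IOPS-to-bandwidth check at 64 KiB:

    awk 'BEGIN { printf "%.2f MiB/s\n", 697.93 * 65536 / 1048576 }'
    # -> 43.62, matching the Nvme0n1p1 (core mask 0x1) row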
00:10:46.663  
00:10:46.663  real	0m12.682s
00:10:46.663  user	0m23.798s
00:10:46.663  sys	0m0.367s
00:10:46.663   10:48:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:46.663   10:48:34	-- common/autotest_common.sh@10 -- # set +x
00:10:46.663  ************************************
00:10:46.663  END TEST bdev_verify_big_io
00:10:46.663  ************************************
00:10:46.663   10:48:34	-- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:46.663   10:48:34	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:46.663   10:48:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:46.663   10:48:34	-- common/autotest_common.sh@10 -- # set +x
00:10:46.663  ************************************
00:10:46.663  START TEST bdev_write_zeroes
00:10:46.663  ************************************
00:10:46.663   10:48:34	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:46.663  [2024-12-15 10:48:35.033458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:46.663  [2024-12-15 10:48:35.033529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121883 ]
00:10:46.663  EAL: No free 2048 kB hugepages reported on node 1
00:10:46.663  [2024-12-15 10:48:35.138755] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:46.663  [2024-12-15 10:48:35.237323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:46.663  [2024-12-15 10:48:35.481111] 'OCF_Core' volume operations registered
00:10:46.663  [2024-12-15 10:48:35.484580] 'OCF_Cache' volume operations registered
00:10:46.663  [2024-12-15 10:48:35.488539] 'OCF Composite' volume operations registered
00:10:46.663  [2024-12-15 10:48:35.492050] 'SPDK_block_device' volume operations registered
00:10:49.957  Running I/O for 1 seconds...
00:10:50.527  
00:10:50.527                                                                                                  Latency(us)
00:10:50.527  
[2024-12-15T09:48:39.543Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:50.527  
[2024-12-15T09:48:39.543Z]  Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:50.527  	 Nvme0n1p1           :       1.01   24203.54      94.55       0.00     0.00    5275.69    2849.39    6069.20
00:10:50.527  
[2024-12-15T09:48:39.543Z]  Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:10:50.527  	 Nvme0n1p2           :       1.01   24122.10      94.23       0.00     0.00    5283.98    3604.48    6040.71
00:10:50.527  
[2024-12-15T09:48:39.543Z]  ===================================================================================================================
00:10:50.527  
[2024-12-15T09:48:39.543Z]  Total                       :              48325.65     188.77       0.00     0.00    5279.83    2849.39    6069.20
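The Total row is just the two jobs summed:

    awk 'BEGIN { printf "%.2f IOPS\n", 24203.54 + 24122.10 }'
    # -> 48325.64, agreeing with the reported 48325.65 up to per-job rounding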
00:10:54.961  
00:10:54.961  real	0m8.421s
00:10:54.961  user	0m7.291s
00:10:54.961  sys	0m0.385s
00:10:54.961   10:48:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:54.961   10:48:43	-- common/autotest_common.sh@10 -- # set +x
00:10:54.961  ************************************
00:10:54.961  END TEST bdev_write_zeroes
00:10:54.961  ************************************
00:10:54.961   10:48:43	-- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:54.961   10:48:43	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:54.961   10:48:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:54.961   10:48:43	-- common/autotest_common.sh@10 -- # set +x
00:10:54.961  ************************************
00:10:54.961  START TEST bdev_json_nonenclosed
00:10:54.961  ************************************
00:10:54.961   10:48:43	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:54.961  [2024-12-15 10:48:43.506243] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:54.961  [2024-12-15 10:48:43.506313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122981 ]
00:10:54.961  EAL: No free 2048 kB hugepages reported on node 1
00:10:54.961  [2024-12-15 10:48:43.601462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:54.961  [2024-12-15 10:48:43.699389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.961  [2024-12-15 10:48:43.699508] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:10:54.961  [2024-12-15 10:48:43.699530] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:54.961  
00:10:54.961  real	0m0.361s
00:10:54.961  user	0m0.236s
00:10:54.961  sys	0m0.122s
00:10:54.961   10:48:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:54.961   10:48:43	-- common/autotest_common.sh@10 -- # set +x
00:10:54.961  ************************************
00:10:54.961  END TEST bdev_json_nonenclosed
00:10:54.961  ************************************
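bdevperf here is fed a config whose top level is not a JSON object, which is what trips the "not enclosed in {}" error above. A minimal reproduction (hypothetical file contents, not the actual test/bdev/nonenclosed.json):

    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # a well-formed config wraps everything in a single object: { "subsystems": [ ... ] }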
00:10:54.961   10:48:43	-- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:54.961   10:48:43	-- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:10:54.961   10:48:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:54.961   10:48:43	-- common/autotest_common.sh@10 -- # set +x
00:10:54.961  ************************************
00:10:54.961  START TEST bdev_json_nonarray
00:10:54.961  ************************************
00:10:54.961   10:48:43	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:10:54.961  [2024-12-15 10:48:43.922766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:54.961  [2024-12-15 10:48:43.922844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123157 ]
00:10:54.961  EAL: No free 2048 kB hugepages reported on node 1
00:10:55.221  [2024-12-15 10:48:44.030487] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:55.221  [2024-12-15 10:48:44.133049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.221  [2024-12-15 10:48:44.133182] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:10:55.221  [2024-12-15 10:48:44.133206] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:10:55.480  
00:10:55.480  real	0m0.382s
00:10:55.480  user	0m0.244s
00:10:55.480  sys	0m0.135s
00:10:55.480   10:48:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:10:55.480   10:48:44	-- common/autotest_common.sh@10 -- # set +x
00:10:55.480  ************************************
00:10:55.480  END TEST bdev_json_nonarray
00:10:55.480  ************************************
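The companion test passes a config where "subsystems" exists but is an object rather than an array. A minimal reproduction (again hypothetical contents, not the actual nonarray.json):

    cat > /tmp/nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # rejected by spdk_subsystem_init_from_json_config: 'subsystems' should be an array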
00:10:55.480   10:48:44	-- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]]
00:10:55.480   10:48:44	-- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]]
00:10:55.480   10:48:44	-- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:10:55.480   10:48:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:10:55.480   10:48:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:55.480   10:48:44	-- common/autotest_common.sh@10 -- # set +x
00:10:55.480  ************************************
00:10:55.480  START TEST bdev_gpt_uuid
00:10:55.480  ************************************
00:10:55.480   10:48:44	-- common/autotest_common.sh@1114 -- # bdev_gpt_uuid
00:10:55.480   10:48:44	-- bdev/blockdev.sh@612 -- # local bdev
00:10:55.480   10:48:44	-- bdev/blockdev.sh@614 -- # start_spdk_tgt
00:10:55.480   10:48:44	-- bdev/blockdev.sh@45 -- # spdk_tgt_pid=2123193
00:10:55.480   10:48:44	-- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:55.480   10:48:44	-- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt '' ''
00:10:55.480   10:48:44	-- bdev/blockdev.sh@47 -- # waitforlisten 2123193
00:10:55.480   10:48:44	-- common/autotest_common.sh@829 -- # '[' -z 2123193 ']'
00:10:55.480   10:48:44	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:55.480   10:48:44	-- common/autotest_common.sh@834 -- # local max_retries=100
00:10:55.480   10:48:44	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:55.480  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:55.480   10:48:44	-- common/autotest_common.sh@838 -- # xtrace_disable
00:10:55.480   10:48:44	-- common/autotest_common.sh@10 -- # set +x
00:10:55.480  [2024-12-15 10:48:44.366222] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:10:55.480  [2024-12-15 10:48:44.366294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123193 ]
00:10:55.480  EAL: No free 2048 kB hugepages reported on node 1
00:10:55.480  [2024-12-15 10:48:44.472433] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:55.740  [2024-12-15 10:48:44.572994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:10:55.740  [2024-12-15 10:48:44.573143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.740  [2024-12-15 10:48:44.755832] 'OCF_Core' volume operations registered
00:10:56.000  [2024-12-15 10:48:44.759046] 'OCF_Cache' volume operations registered
00:10:56.000  [2024-12-15 10:48:44.762710] 'OCF Composite' volume operations registered
00:10:56.000  [2024-12-15 10:48:44.765928] 'SPDK_block_device' volume operations registered
00:10:56.567   10:48:45	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:10:56.567   10:48:45	-- common/autotest_common.sh@862 -- # return 0
00:10:56.567   10:48:45	-- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:10:56.568   10:48:45	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:56.568   10:48:45	-- common/autotest_common.sh@10 -- # set +x
00:10:59.862  Some configs were skipped because the RPC state that can call them passed over.
00:10:59.862   10:48:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.862   10:48:48	-- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine
00:10:59.862   10:48:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.862   10:48:48	-- common/autotest_common.sh@10 -- # set +x
00:10:59.862   10:48:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.862    10:48:48	-- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:10:59.862    10:48:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.862    10:48:48	-- common/autotest_common.sh@10 -- # set +x
00:10:59.862    10:48:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.862   10:48:48	-- bdev/blockdev.sh@619 -- # bdev='[
00:10:59.862  {
00:10:59.862  "name": "Nvme0n1p1",
00:10:59.862  "aliases": [
00:10:59.862  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:10:59.862  ],
00:10:59.862  "product_name": "GPT Disk",
00:10:59.862  "block_size": 512,
00:10:59.862  "num_blocks": 3907016704,
00:10:59.862  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:59.862  "assigned_rate_limits": {
00:10:59.862  "rw_ios_per_sec": 0,
00:10:59.862  "rw_mbytes_per_sec": 0,
00:10:59.862  "r_mbytes_per_sec": 0,
00:10:59.862  "w_mbytes_per_sec": 0
00:10:59.862  },
00:10:59.862  "claimed": false,
00:10:59.862  "zoned": false,
00:10:59.862  "supported_io_types": {
00:10:59.862  "read": true,
00:10:59.862  "write": true,
00:10:59.862  "unmap": true,
00:10:59.862  "write_zeroes": true,
00:10:59.862  "flush": true,
00:10:59.862  "reset": true,
00:10:59.862  "compare": false,
00:10:59.862  "compare_and_write": false,
00:10:59.862  "abort": true,
00:10:59.862  "nvme_admin": false,
00:10:59.862  "nvme_io": false
00:10:59.862  },
00:10:59.862  "driver_specific": {
00:10:59.862  "gpt": {
00:10:59.862  "base_bdev": "Nvme0n1",
00:10:59.862  "offset_blocks": 2048,
00:10:59.862  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:10:59.862  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:10:59.862  "partition_name": "SPDK_TEST_first"
00:10:59.862  }
00:10:59.862  }
00:10:59.862  }
00:10:59.862  ]'
00:10:59.862    10:48:48	-- bdev/blockdev.sh@620 -- # jq -r length
00:10:59.862   10:48:48	-- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]]
00:10:59.862    10:48:48	-- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]'
00:10:59.862   10:48:48	-- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:59.862    10:48:48	-- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:59.862   10:48:48	-- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:10:59.862    10:48:48	-- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:10:59.862    10:48:48	-- common/autotest_common.sh@561 -- # xtrace_disable
00:10:59.862    10:48:48	-- common/autotest_common.sh@10 -- # set +x
00:10:59.862    10:48:48	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:59.862   10:48:48	-- bdev/blockdev.sh@624 -- # bdev='[
00:10:59.862  {
00:10:59.862  "name": "Nvme0n1p2",
00:10:59.862  "aliases": [
00:10:59.862  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:10:59.862  ],
00:10:59.862  "product_name": "GPT Disk",
00:10:59.862  "block_size": 512,
00:10:59.862  "num_blocks": 3907016703,
00:10:59.862  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:59.862  "assigned_rate_limits": {
00:10:59.862  "rw_ios_per_sec": 0,
00:10:59.862  "rw_mbytes_per_sec": 0,
00:10:59.862  "r_mbytes_per_sec": 0,
00:10:59.862  "w_mbytes_per_sec": 0
00:10:59.862  },
00:10:59.862  "claimed": false,
00:10:59.862  "zoned": false,
00:10:59.862  "supported_io_types": {
00:10:59.862  "read": true,
00:10:59.862  "write": true,
00:10:59.862  "unmap": true,
00:10:59.862  "write_zeroes": true,
00:10:59.862  "flush": true,
00:10:59.862  "reset": true,
00:10:59.862  "compare": false,
00:10:59.862  "compare_and_write": false,
00:10:59.862  "abort": true,
00:10:59.862  "nvme_admin": false,
00:10:59.862  "nvme_io": false
00:10:59.862  },
00:10:59.863  "driver_specific": {
00:10:59.863  "gpt": {
00:10:59.863  "base_bdev": "Nvme0n1",
00:10:59.863  "offset_blocks": 3907018752,
00:10:59.863  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:10:59.863  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:10:59.863  "partition_name": "SPDK_TEST_second"
00:10:59.863  }
00:10:59.863  }
00:10:59.863  }
00:10:59.863  ]'
00:10:59.863    10:48:48	-- bdev/blockdev.sh@625 -- # jq -r length
00:10:59.863   10:48:48	-- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]]
00:10:59.863    10:48:48	-- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]'
00:10:59.863   10:48:48	-- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:10:59.863    10:48:48	-- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:10:59.863   10:48:48	-- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:10:59.863   10:48:48	-- bdev/blockdev.sh@629 -- # killprocess 2123193
00:10:59.863   10:48:48	-- common/autotest_common.sh@936 -- # '[' -z 2123193 ']'
00:10:59.863   10:48:48	-- common/autotest_common.sh@940 -- # kill -0 2123193
00:10:59.863    10:48:48	-- common/autotest_common.sh@941 -- # uname
00:10:59.863   10:48:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:59.863    10:48:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2123193
00:10:59.863   10:48:48	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:59.863   10:48:48	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:59.863   10:48:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2123193'
00:10:59.863  killing process with pid 2123193
00:10:59.863   10:48:48	-- common/autotest_common.sh@955 -- # kill 2123193
00:10:59.863   10:48:48	-- common/autotest_common.sh@960 -- # wait 2123193
00:11:04.059  
00:11:04.059  real	0m8.550s
00:11:04.059  user	0m8.021s
00:11:04.059  sys	0m0.634s
00:11:04.059   10:48:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:04.059   10:48:52	-- common/autotest_common.sh@10 -- # set +x
00:11:04.059  ************************************
00:11:04.059  END TEST bdev_gpt_uuid
00:11:04.059  ************************************
00:11:04.059   10:48:52	-- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]]
00:11:04.059   10:48:52	-- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT
00:11:04.059   10:48:52	-- bdev/blockdev.sh@809 -- # cleanup
00:11:04.059   10:48:52	-- bdev/blockdev.sh@21 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/aiofile
00:11:04.059   10:48:52	-- bdev/blockdev.sh@22 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/bdev/bdev.json
00:11:04.059   10:48:52	-- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]]
00:11:04.059   10:48:52	-- bdev/blockdev.sh@28 -- # [[ gpt == daos ]]
00:11:04.059   10:48:52	-- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]]
00:11:04.059   10:48:52	-- bdev/blockdev.sh@33 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:11:06.597  Waiting for block devices as requested
00:11:06.597  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:11:06.856  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:11:06.856  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:11:07.116  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:11:07.116  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:11:07.116  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:11:07.376  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:11:07.376  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:11:07.376  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:11:07.636  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:11:07.636  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:11:07.636  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:11:07.896  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:11:07.896  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:11:07.896  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:11:08.156  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:11:08.156  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:11:08.156   10:48:57	-- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]]
00:11:08.156   10:48:57	-- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1
00:11:08.416  /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:11:08.416  /dev/nvme0n1: 8 bytes were erased at offset 0x3a3817d5e00 (gpt): 45 46 49 20 50 41 52 54
00:11:08.416  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:11:08.416  /dev/nvme0n1: calling ioctl to re-read partition table: Success
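The byte patterns wipefs reports are the on-disk magics: 45 46 49 20 50 41 52 54 is ASCII "EFI PART", the GPT header signature (erased at both the primary header and the backup near the end of the disk), and 55 aa at offset 0x1fe is the protective-MBR boot signature. Decoding the first one:

    echo '45 46 49 20 50 41 52 54' | xxd -r -p
    # -> EFI PART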
00:11:08.416   10:48:57	-- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]]
00:11:08.416  
00:11:08.416  real	1m37.998s
00:11:08.416  user	2m13.521s
00:11:08.416  sys	0m13.831s
00:11:08.416   10:48:57	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:08.416   10:48:57	-- common/autotest_common.sh@10 -- # set +x
00:11:08.416  ************************************
00:11:08.416  END TEST blockdev_nvme_gpt
00:11:08.416  ************************************
00:11:08.416   10:48:57	-- spdk/autotest.sh@209 -- # run_test nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh
00:11:08.416   10:48:57	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:08.416   10:48:57	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:08.416   10:48:57	-- common/autotest_common.sh@10 -- # set +x
00:11:08.416  ************************************
00:11:08.416  START TEST nvme
00:11:08.416  ************************************
00:11:08.416   10:48:57	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme.sh
00:11:08.676  * Looking for test storage...
00:11:08.676  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:11:08.676    10:48:57	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:11:08.676     10:48:57	-- common/autotest_common.sh@1690 -- # lcov --version
00:11:08.676     10:48:57	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:11:08.676    10:48:57	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:11:08.676    10:48:57	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:11:08.676    10:48:57	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:11:08.676    10:48:57	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:11:08.676    10:48:57	-- scripts/common.sh@335 -- # IFS=.-:
00:11:08.676    10:48:57	-- scripts/common.sh@335 -- # read -ra ver1
00:11:08.676    10:48:57	-- scripts/common.sh@336 -- # IFS=.-:
00:11:08.676    10:48:57	-- scripts/common.sh@336 -- # read -ra ver2
00:11:08.676    10:48:57	-- scripts/common.sh@337 -- # local 'op=<'
00:11:08.676    10:48:57	-- scripts/common.sh@339 -- # ver1_l=2
00:11:08.676    10:48:57	-- scripts/common.sh@340 -- # ver2_l=1
00:11:08.676    10:48:57	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:11:08.676    10:48:57	-- scripts/common.sh@343 -- # case "$op" in
00:11:08.676    10:48:57	-- scripts/common.sh@344 -- # : 1
00:11:08.676    10:48:57	-- scripts/common.sh@363 -- # (( v = 0 ))
00:11:08.676    10:48:57	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:08.676     10:48:57	-- scripts/common.sh@364 -- # decimal 1
00:11:08.676     10:48:57	-- scripts/common.sh@352 -- # local d=1
00:11:08.676     10:48:57	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:08.676     10:48:57	-- scripts/common.sh@354 -- # echo 1
00:11:08.676    10:48:57	-- scripts/common.sh@364 -- # ver1[v]=1
00:11:08.676     10:48:57	-- scripts/common.sh@365 -- # decimal 2
00:11:08.676     10:48:57	-- scripts/common.sh@352 -- # local d=2
00:11:08.676     10:48:57	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:08.676     10:48:57	-- scripts/common.sh@354 -- # echo 2
00:11:08.676    10:48:57	-- scripts/common.sh@365 -- # ver2[v]=2
00:11:08.676    10:48:57	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:11:08.676    10:48:57	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:11:08.676    10:48:57	-- scripts/common.sh@367 -- # return 0
00:11:08.676    10:48:57	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:08.676    10:48:57	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:11:08.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.676  		--rc genhtml_branch_coverage=1
00:11:08.676  		--rc genhtml_function_coverage=1
00:11:08.676  		--rc genhtml_legend=1
00:11:08.676  		--rc geninfo_all_blocks=1
00:11:08.676  		--rc geninfo_unexecuted_blocks=1
00:11:08.676  		
00:11:08.676  		'
00:11:08.676    10:48:57	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:11:08.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.676  		--rc genhtml_branch_coverage=1
00:11:08.676  		--rc genhtml_function_coverage=1
00:11:08.676  		--rc genhtml_legend=1
00:11:08.676  		--rc geninfo_all_blocks=1
00:11:08.676  		--rc geninfo_unexecuted_blocks=1
00:11:08.676  		
00:11:08.676  		'
00:11:08.676    10:48:57	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:11:08.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.676  		--rc genhtml_branch_coverage=1
00:11:08.676  		--rc genhtml_function_coverage=1
00:11:08.676  		--rc genhtml_legend=1
00:11:08.676  		--rc geninfo_all_blocks=1
00:11:08.676  		--rc geninfo_unexecuted_blocks=1
00:11:08.676  		
00:11:08.676  		'
00:11:08.676    10:48:57	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:11:08.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:08.676  		--rc genhtml_branch_coverage=1
00:11:08.676  		--rc genhtml_function_coverage=1
00:11:08.676  		--rc genhtml_legend=1
00:11:08.676  		--rc geninfo_all_blocks=1
00:11:08.676  		--rc geninfo_unexecuted_blocks=1
00:11:08.676  		
00:11:08.676  		'
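The scripts/common.sh trace above splits each version string on the characters ".-:" and compares it component by component to decide whether the installed lcov predates 2.x, and therefore which coverage flags to export. A distilled sketch of that comparison, not the exact source:

    version_lt() {  # returns 0 (true) if version $1 < version $2
      local IFS='.-:'
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      local i
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: use the branch/function coverage flags above"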
00:11:08.676   10:48:57	-- nvme/nvme.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:11:11.972  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:11:11.972  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:11:11.972  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:11:11.972  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:11:11.972  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:11:11.973  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:11:15.267  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
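setup.sh has now moved the I/OAT channels (8086:2021) and the NVMe controller at 0000:5e:00.0 onto vfio-pci so user-space SPDK can claim them; the binding can be confirmed straight from sysfs:

    basename "$(readlink /sys/bus/pci/devices/0000:5e:00.0/driver)"
    # -> vfio-pci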
00:11:15.267    10:49:03	-- nvme/nvme.sh@79 -- # uname
00:11:15.267   10:49:03	-- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:11:15.267   10:49:03	-- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:11:15.267   10:49:03	-- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:11:15.267   10:49:03	-- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:11:15.267   10:49:03	-- common/autotest_common.sh@1054 -- # _randomize_va_space=2
00:11:15.267   10:49:03	-- common/autotest_common.sh@1055 -- # echo 0
00:11:15.267   10:49:03	-- common/autotest_common.sh@1057 -- # stubpid=2127080
00:11:15.267   10:49:03	-- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes...
00:11:15.267  Waiting for stub to ready for secondary processes...
00:11:15.267   10:49:03	-- common/autotest_common.sh@1056 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:11:15.267   10:49:03	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:15.267   10:49:03	-- common/autotest_common.sh@1061 -- # [[ -e /proc/2127080 ]]
00:11:15.267   10:49:03	-- common/autotest_common.sh@1062 -- # sleep 1s
00:11:15.267  [2024-12-15 10:49:03.981043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:15.267  [2024-12-15 10:49:03.981102] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:15.267  EAL: No free 2048 kB hugepages reported on node 1
00:11:16.208   10:49:04	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:16.208   10:49:04	-- common/autotest_common.sh@1061 -- # [[ -e /proc/2127080 ]]
00:11:16.208   10:49:04	-- common/autotest_common.sh@1062 -- # sleep 1s
00:11:17.147  [2024-12-15 10:49:05.820784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:17.147  [2024-12-15 10:49:05.928348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:11:17.147  [2024-12-15 10:49:05.928416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:11:17.147  [2024-12-15 10:49:05.928413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:11:17.147   10:49:05	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:17.147   10:49:05	-- common/autotest_common.sh@1061 -- # [[ -e /proc/2127080 ]]
00:11:17.147   10:49:05	-- common/autotest_common.sh@1062 -- # sleep 1s
00:11:18.096   10:49:06	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:18.096   10:49:06	-- common/autotest_common.sh@1061 -- # [[ -e /proc/2127080 ]]
00:11:18.096   10:49:06	-- common/autotest_common.sh@1062 -- # sleep 1s
00:11:19.035   10:49:07	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:19.035   10:49:07	-- common/autotest_common.sh@1061 -- # [[ -e /proc/2127080 ]]
00:11:19.035   10:49:07	-- common/autotest_common.sh@1062 -- # sleep 1s
00:11:19.974  [2024-12-15 10:49:08.936952] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:11:19.974  [2024-12-15 10:49:08.949885] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:11:19.974  [2024-12-15 10:49:08.950055] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:11:19.974   10:49:08	-- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']'
00:11:19.974   10:49:08	-- common/autotest_common.sh@1064 -- # echo done.
00:11:19.974  done.
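The repeated "sleep 1s" iterations above are a readiness poll: autotest loops until the stub creates /var/run/spdk_stub0 or its PID disappears from /proc. A minimal re-creation of that loop (a sketch, not the exact common/autotest_common.sh source):

    stubpid=2127080
    while [ ! -e /var/run/spdk_stub0 ] && [ -e "/proc/$stubpid" ]; do
      sleep 1s
    done
    echo done.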
00:11:19.974   10:49:08	-- nvme/nvme.sh@84 -- # run_test nvme_reset /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:11:19.974   10:49:08	-- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:11:19.975   10:49:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:19.975   10:49:08	-- common/autotest_common.sh@10 -- # set +x
00:11:19.975  ************************************
00:11:19.975  START TEST nvme_reset
00:11:19.975  ************************************
00:11:19.975   10:49:08	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:11:20.544  [2024-12-15 10:49:09.317892] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
  [... 63 more identical "nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command" lines (10:49:09.317977 through 10:49:09.319056), 64 aborts in total for this reset ...]
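One "aborting outstanding command" line per queued I/O: reset runs with -q 64, so each controller reset finds 64 in-flight writes to abort, which is exactly the count above. Counting them in a saved excerpt (hypothetical file name):

    grep -c 'aborting outstanding command' reset-pass1.log
    # -> 64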
00:11:25.827  [2024-12-15 10:49:14.333046] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
  [... 63 more identical "aborting outstanding command" lines (10:49:14.333123 through 10:49:14.334166), 64 aborts for the second reset pass ...]
00:11:31.108  [2024-12-15 10:49:19.348568] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:31.108  [2024-12-15 10:49:19.348624] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:31.108  [2024-12-15 10:49:19.348650] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:31.108  [2024-12-15 10:49:19.348666] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:31.108  [2024-12-15 10:49:19.348683] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:11:31.108  [... message repeated 59 more times, 2024-12-15 10:49:19.348700 through 10:49:19.349659, one line per outstanding command aborted during the reset ...]
00:11:36.392  Initializing NVMe Controllers
00:11:36.392  Associating INTEL SSDPE2KX040T8 (BTLJ83030AK84P0DGN) with lcore 0
00:11:36.392  Initialization complete. Launching workers.
00:11:36.392  Starting thread on core 0
00:11:36.392  ========================================================
00:11:36.392            633408 IO completed successfully
00:11:36.392                64 IO completed with error
00:11:36.392  --------------------------------------------------------
00:11:36.392            633472 IO completed total
00:11:36.392            633472 IO submitted
00:11:36.392  Starting thread on core 0
00:11:36.392  ========================================================
00:11:36.392            633536 IO completed successfully
00:11:36.392                64 IO completed with error
00:11:36.392  --------------------------------------------------------
00:11:36.392            633600 IO completed total
00:11:36.392            633600 IO submitted
00:11:36.392  Starting thread on core 0
00:11:36.392  ========================================================
00:11:36.392            633280 IO completed successfully
00:11:36.392                64 IO completed with error
00:11:36.392  --------------------------------------------------------
00:11:36.392            633344 IO completed total
00:11:36.392            633344 IO submitted
00:11:36.392  
00:11:36.392  real	0m15.398s
00:11:36.392  user	0m15.078s
00:11:36.392  sys	0m0.192s
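
Each thread summary above balances exactly: successes plus errors equal the completed total, which equals the submitted count, so no I/O was lost across the resets (the 64 errored I/O per thread are presumably the commands aborted during the resets logged earlier). A trivial check with the first thread's numbers:

    /* Counter consistency check for the first thread's summary. */
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t ok = 633408, err = 64;
        uint64_t total = 633472, submitted = 633472;
        assert(ok + err == total);   /* every completion counted once */
        assert(total == submitted);  /* nothing left outstanding */
        return 0;
    }
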
00:11:36.392   10:49:24	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:36.392   10:49:24	-- common/autotest_common.sh@10 -- # set +x
00:11:36.392  ************************************
00:11:36.392  END TEST nvme_reset
00:11:36.392  ************************************
00:11:36.392   10:49:24	-- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:11:36.392   10:49:24	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:36.392   10:49:24	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:36.393   10:49:24	-- common/autotest_common.sh@10 -- # set +x
00:11:36.393  ************************************
00:11:36.393  START TEST nvme_identify
00:11:36.393  ************************************
00:11:36.393   10:49:24	-- common/autotest_common.sh@1114 -- # nvme_identify
00:11:36.393   10:49:24	-- nvme/nvme.sh@12 -- # bdfs=()
00:11:36.393   10:49:24	-- nvme/nvme.sh@12 -- # local bdfs bdf
00:11:36.393   10:49:24	-- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:11:36.393    10:49:24	-- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:11:36.393    10:49:24	-- common/autotest_common.sh@1508 -- # bdfs=()
00:11:36.393    10:49:24	-- common/autotest_common.sh@1508 -- # local bdfs
00:11:36.393    10:49:24	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:36.393     10:49:24	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:11:36.393     10:49:24	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:11:36.393    10:49:24	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:11:36.393    10:49:24	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
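
get_nvme_bdfs derives the BDF list by piping gen_nvme.sh's JSON config through jq -r '.config[].params.traddr', which yields the single address 0000:5e:00.0 here. Outside the harness, roughly the same list can be read from Linux sysfs; a small sketch, assuming PCIe-attached controllers whose /sys/class/nvme/nvmeN/address files hold their BDFs:

    /* Print one PCI BDF per NVMe controller found in sysfs. */
    #include <stdio.h>
    #include <glob.h>

    int main(void)
    {
        glob_t g;
        if (glob("/sys/class/nvme/nvme*/address", 0, NULL, &g) != 0) {
            return 1;                    /* no controllers (or no sysfs) */
        }
        for (size_t i = 0; i < g.gl_pathc; i++) {
            char buf[64];
            FILE *f = fopen(g.gl_pathv[i], "r");
            if (f != NULL) {
                if (fgets(buf, sizeof(buf), f) != NULL) {
                    fputs(buf, stdout);  /* e.g. "0000:5e:00.0\n" */
                }
                fclose(f);
            }
        }
        globfree(&g);
        return 0;
    }

Note this enumerates kernel-owned controllers; once a device is rebound to vfio-pci or uio for SPDK it disappears from /sys/class/nvme, which is why the harness asks gen_nvme.sh instead.
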
00:11:36.393   10:49:24	-- nvme/nvme.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -i 0
00:11:36.393  =====================================================
00:11:36.393  NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:11:36.393  =====================================================
00:11:36.393  Controller Capabilities/Features
00:11:36.393  ================================
00:11:36.393  Vendor ID:                             8086
00:11:36.393  Subsystem Vendor ID:                   8086
00:11:36.393  Serial Number:                         BTLJ83030AK84P0DGN
00:11:36.393  Model Number:                          INTEL SSDPE2KX040T8
00:11:36.393  Firmware Version:                      VDV10184
00:11:36.393  Recommended Arb Burst:                 0
00:11:36.393  IEEE OUI Identifier:                   e4 d2 5c
00:11:36.393  Multi-path I/O
00:11:36.393    May have multiple subsystem ports:   No
00:11:36.393    May have multiple controllers:       No
00:11:36.393    Associated with SR-IOV VF:           No
00:11:36.393  Max Data Transfer Size:                131072
00:11:36.393  Max Number of Namespaces:              128
00:11:36.393  Max Number of I/O Queues:              128
00:11:36.393  NVMe Specification Version (VS):       1.2
00:11:36.393  NVMe Specification Version (Identify): 1.2
00:11:36.393  Maximum Queue Entries:                 4096
00:11:36.393  Contiguous Queues Required:            Yes
00:11:36.393  Arbitration Mechanisms Supported
00:11:36.393    Weighted Round Robin:                Supported
00:11:36.393    Vendor Specific:                     Not Supported
00:11:36.393  Reset Timeout:                         60000 ms
00:11:36.393  Doorbell Stride:                       4 bytes
00:11:36.393  NVM Subsystem Reset:                   Not Supported
00:11:36.393  Command Sets Supported
00:11:36.393    NVM Command Set:                     Supported
00:11:36.393  Boot Partition:                        Not Supported
00:11:36.393  Memory Page Size Minimum:              4096 bytes
00:11:36.393  Memory Page Size Maximum:              4096 bytes
00:11:36.393  Persistent Memory Region:              Not Supported
00:11:36.393  Optional Asynchronous Events Supported
00:11:36.393    Namespace Attribute Notices:         Not Supported
00:11:36.393    Firmware Activation Notices:         Supported
00:11:36.393    ANA Change Notices:                  Not Supported
00:11:36.393    PLE Aggregate Log Change Notices:    Not Supported
00:11:36.393    LBA Status Info Alert Notices:       Not Supported
00:11:36.393    EGE Aggregate Log Change Notices:    Not Supported
00:11:36.393    Normal NVM Subsystem Shutdown event: Not Supported
00:11:36.393    Zone Descriptor Change Notices:      Not Supported
00:11:36.393    Discovery Log Change Notices:        Not Supported
00:11:36.393  Controller Attributes
00:11:36.393    128-bit Host Identifier:             Not Supported
00:11:36.393    Non-Operational Permissive Mode:     Not Supported
00:11:36.393    NVM Sets:                            Not Supported
00:11:36.393    Read Recovery Levels:                Not Supported
00:11:36.393    Endurance Groups:                    Not Supported
00:11:36.393    Predictable Latency Mode:            Not Supported
00:11:36.393    Traffic Based Keep Alive:            Not Supported
00:11:36.393    Namespace Granularity:               Not Supported
00:11:36.393    SQ Associations:                     Not Supported
00:11:36.393    UUID List:                           Not Supported
00:11:36.393    Multi-Domain Subsystem:              Not Supported
00:11:36.393    Fixed Capacity Management:           Not Supported
00:11:36.393    Variable Capacity Management:        Not Supported
00:11:36.393    Delete Endurance Group:              Not Supported
00:11:36.393    Delete NVM Set:                      Not Supported
00:11:36.393    Extended LBA Formats Supported:      Not Supported
00:11:36.393    Flexible Data Placement Supported:   Not Supported
00:11:36.393  
00:11:36.393  Controller Memory Buffer Support
00:11:36.393  ================================
00:11:36.393  Supported:                             No
00:11:36.393  
00:11:36.393  Persistent Memory Region Support
00:11:36.393  ================================
00:11:36.393  Supported:                             No
00:11:36.393  
00:11:36.393  Admin Command Set Attributes
00:11:36.393  ============================
00:11:36.393  Security Send/Receive:                 Not Supported
00:11:36.393  Format NVM:                            Supported
00:11:36.393  Firmware Activate/Download:            Supported
00:11:36.393  Namespace Management:                  Supported
00:11:36.393  Device Self-Test:                      Not Supported
00:11:36.393  Directives:                            Not Supported
00:11:36.393  NVMe-MI:                               Not Supported
00:11:36.393  Virtualization Management:             Not Supported
00:11:36.393  Doorbell Buffer Config:                Not Supported
00:11:36.393  Get LBA Status Capability:             Not Supported
00:11:36.393  Command & Feature Lockdown Capability: Not Supported
00:11:36.393  Abort Command Limit:                   4
00:11:36.393  Async Event Request Limit:             4
00:11:36.393  Number of Firmware Slots:              4
00:11:36.393  Firmware Slot 1 Read-Only:             No
00:11:36.393  Firmware Activation Without Reset:     Yes
00:11:36.393  Multiple Update Detection Support:     No
00:11:36.393  Firmware Update Granularity:           No Information Provided
00:11:36.393  Per-Namespace SMART Log:               No
00:11:36.393  Asymmetric Namespace Access Log Page:  Not Supported
00:11:36.393  Subsystem NQN:                         
00:11:36.393  Command Effects Log Page:              Supported
00:11:36.393  Get Log Page Extended Data:            Supported
00:11:36.393  Telemetry Log Pages:                   Supported
00:11:36.393  Persistent Event Log Pages:            Not Supported
00:11:36.393  Supported Log Pages Log Page:          May Support
00:11:36.393  Commands Supported & Effects Log Page: Not Supported
00:11:36.393  Feature Identifiers & Effects Log Page: May Support
00:11:36.393  NVMe-MI Commands & Effects Log Page:   May Support
00:11:36.393  Data Area 4 for Telemetry Log:         Not Supported
00:11:36.393  Error Log Page Entries Supported:      64
00:11:36.393  Keep Alive:                            Not Supported
00:11:36.393  
00:11:36.393  NVM Command Set Attributes
00:11:36.393  ==========================
00:11:36.393  Submission Queue Entry Size
00:11:36.393    Max:                       64
00:11:36.393    Min:                       64
00:11:36.393  Completion Queue Entry Size
00:11:36.393    Max:                       16
00:11:36.393    Min:                       16
00:11:36.393  Number of Namespaces:        128
00:11:36.393  Compare Command:             Not Supported
00:11:36.393  Write Uncorrectable Command: Supported
00:11:36.393  Dataset Management Command:  Supported
00:11:36.393  Write Zeroes Command:        Not Supported
00:11:36.393  Set Features Save Field:     Not Supported
00:11:36.393  Reservations:                Not Supported
00:11:36.393  Timestamp:                   Not Supported
00:11:36.393  Copy:                        Not Supported
00:11:36.393  Volatile Write Cache:        Not Present
00:11:36.393  Atomic Write Unit (Normal):  1
00:11:36.393  Atomic Write Unit (PFail):   1
00:11:36.393  Atomic Compare & Write Unit: 1
00:11:36.393  Fused Compare & Write:       Not Supported
00:11:36.393  Scatter-Gather List
00:11:36.393    SGL Command Set:           Not Supported
00:11:36.393    SGL Keyed:                 Not Supported
00:11:36.393    SGL Bit Bucket Descriptor: Not Supported
00:11:36.393    SGL Metadata Pointer:      Not Supported
00:11:36.393    Oversized SGL:             Not Supported
00:11:36.393    SGL Metadata Address:      Not Supported
00:11:36.393    SGL Offset:                Not Supported
00:11:36.393    Transport SGL Data Block:  Not Supported
00:11:36.393  Replay Protected Memory Block:  Not Supported
00:11:36.393  
00:11:36.393  Firmware Slot Information
00:11:36.393  =========================
00:11:36.393  Active slot:                 1
00:11:36.393  Slot 1 Firmware Revision:    VDV10184
00:11:36.393  
00:11:36.393  
00:11:36.393  Commands Supported and Effects
00:11:36.393  ==============================
00:11:36.393  Admin Commands
00:11:36.393  --------------
00:11:36.393     Delete I/O Submission Queue (00h): Supported 
00:11:36.393     Create I/O Submission Queue (01h): Supported All-NS-Exclusive
00:11:36.393                    Get Log Page (02h): Supported 
00:11:36.393     Delete I/O Completion Queue (04h): Supported 
00:11:36.393     Create I/O Completion Queue (05h): Supported All-NS-Exclusive
00:11:36.393                        Identify (06h): Supported 
00:11:36.393                           Abort (08h): Supported 
00:11:36.393                    Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 
00:11:36.393                    Get Features (0Ah): Supported 
00:11:36.393      Asynchronous Event Request (0Ch): Supported 
00:11:36.393            Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive
00:11:36.393                 Firmware Commit (10h): Supported Ctrlr-Cap-Change 
00:11:36.393         Firmware Image Download (11h): Supported 
00:11:36.393            Namespace Attachment (15h): Supported Per-NS-Exclusive
00:11:36.393                      Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive
00:11:36.393                 Vendor specific (C8h): Supported 
00:11:36.393                 Vendor specific (D2h): Supported 
00:11:36.393                 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:11:36.393                 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:11:36.393  I/O Commands
00:11:36.393  ------------
00:11:36.393                           Flush (00h): Supported LBA-Change 
00:11:36.394                           Write (01h): Supported LBA-Change 
00:11:36.394                            Read (02h): Supported 
00:11:36.394             Write Uncorrectable (04h): Supported LBA-Change 
00:11:36.394              Dataset Management (09h): Supported LBA-Change 
00:11:36.394  
00:11:36.394  Error Log
00:11:36.394  =========
00:11:36.394  Entry: 0
00:11:36.394  Error Count:            0x970a
00:11:36.394  Submission Queue Id:    0x2
00:11:36.394  Command Id:             0xffff
00:11:36.394  Phase Bit:              0
00:11:36.394  Status Code:            0x6
00:11:36.394  Status Code Type:       0x0
00:11:36.394  Do Not Retry:           1
00:11:36.394  Error Location:         0xffff
00:11:36.394  LBA:                    0x0
00:11:36.394  Namespace:              0xffffffff
00:11:36.394  Vendor Log Page:        0x0
00:11:36.394  -----------
00:11:36.394  [Entries 1-63 elided: each repeats Entry 0 field-for-field, except that Error Count decrements by one per entry, from 0x9709 at Entry 1 down to 0x96cb at Entry 63, and Submission Queue Id cycles through the pattern 0x2, 0x2, 0x0]
00:11:36.397  
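
The 64-entry dump above is the standard Error Information log page (LID 01h); the controller advertises 64 entries ("Error Log Page Entries Supported: 64"), and every entry here carries the same status (Status Code Type 0x0, Status Code 0x6, Do Not Retry set). A sketch of fetching that page through SPDK's public API, assuming an already-attached controller and omitting error handling; treat the details as an approximation rather than production code:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    static struct spdk_nvme_error_information_entry g_err[64];
    static volatile bool g_done;

    static void
    get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        g_done = true;   /* g_err now holds the entries dumped above */
    }

    static int
    read_error_log(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_ERROR,
                                                  SPDK_NVME_GLOBAL_NS_TAG,
                                                  g_err, sizeof(g_err), 0,
                                                  get_log_done, NULL);
        while (rc == 0 && !g_done) {
            /* Get Log Page is an admin command; poll the admin queue. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return rc;
    }
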
00:11:36.397  Arbitration
00:11:36.397  ===========
00:11:36.397  Arbitration Burst:           1
00:11:36.397  Low Priority Weight:         1
00:11:36.397  Medium Priority Weight:      1
00:11:36.397  High Priority Weight:        1
00:11:36.397  
00:11:36.397  Power Management
00:11:36.397  ================
00:11:36.397  Number of Power States:          1
00:11:36.397  Current Power State:             Power State #0
00:11:36.397  Power State #0:
00:11:36.397    Max Power:                     20.00 W
00:11:36.397    Non-Operational State:         Operational
00:11:36.397    Entry Latency:                 Not Reported
00:11:36.397    Exit Latency:                  Not Reported
00:11:36.397    Relative Read Throughput:      0
00:11:36.397    Relative Read Latency:         0
00:11:36.397    Relative Write Throughput:     0
00:11:36.397    Relative Write Latency:        0
00:11:36.397    Idle Power:                     Not Reported
00:11:36.397    Active Power:                   Not Reported
00:11:36.397  Non-Operational Permissive Mode: Not Supported
00:11:36.397  
00:11:36.397  Health Information
00:11:36.397  ==================
00:11:36.397  Critical Warnings:
00:11:36.397    Available Spare Space:     OK
00:11:36.397    Temperature:               OK
00:11:36.397    Device Reliability:        OK
00:11:36.397    Read Only:                 No
00:11:36.397    Volatile Memory Backup:    OK
00:11:36.397  Current Temperature:         310 Kelvin (37 Celsius)
00:11:36.397  Temperature Threshold:       343 Kelvin (70 Celsius)
00:11:36.397  Available Spare:             99%
00:11:36.397  Available Spare Threshold:   10%
00:11:36.397  Life Percentage Used:        32%
00:11:36.397  Data Units Read:             628349442
00:11:36.397  Data Units Written:          790781043
00:11:36.397  Host Read Commands:          36984307828
00:11:36.397  Host Write Commands:         42949631884
00:11:36.397  Controller Busy Time:        3917 minutes
00:11:36.397  Power Cycles:                31
00:11:36.397  Power On Hours:              20842 hours
00:11:36.397  Unsafe Shutdowns:            46
00:11:36.397  Unrecoverable Media Errors:  0
00:11:36.397  Lifetime Error Log Entries:  38666
00:11:36.397  Warning Temperature Time:    2198 minutes
00:11:36.397  Critical Temperature Time:   0 minutes
00:11:36.397  
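
Per the NVMe specification, the Data Units Read/Written counters above are in units of 1000 512-byte blocks, so the drive's lifetime traffic works out to roughly 322 TB read and 405 TB written:

    /* Convert the SMART data-unit counters above into terabytes. */
    #include <stdio.h>

    int main(void)
    {
        const double unit = 512.0 * 1000;          /* bytes per data unit */
        double read_units = 628349442, written_units = 790781043;
        printf("read:    %.1f TB\n", read_units * unit / 1e12);    /* ~321.7 */
        printf("written: %.1f TB\n", written_units * unit / 1e12); /* ~404.9 */
        return 0;
    }
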
00:11:36.397  Number of Queues
00:11:36.397  ================
00:11:36.397  Number of I/O Submission Queues:      128
00:11:36.397  Number of I/O Completion Queues:      128
00:11:36.397  
00:11:36.397  Intel Health Information
00:11:36.397  ========================
00:11:36.397  Program Fail Count:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 6
00:11:36.397  Erase Fail Count:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 1
00:11:36.397  Wear Leveling Count:
00:11:36.397    Normalized Value : 65
00:11:36.397    Current Raw Value:
00:11:36.397    Min: 308
00:11:36.397    Max: 1772
00:11:36.397    Avg: 1520
00:11:36.397  End to End Error Detection Count:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 0
00:11:36.397  CRC Error Count:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 0
00:11:36.397  Timed Workload, Media Wear:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 65535
00:11:36.397  Timed Workload, Host Read/Write Ratio:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 65535%
00:11:36.397  Timed Workload, Timer:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 65535
00:11:36.397  Thermal Throttle Status:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value:
00:11:36.397    Percentage: 0%
00:11:36.397    Throttling Event Count: 1
00:11:36.397  Retry Buffer Overflow Counter:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 0
00:11:36.397  PLL Lock Loss Count:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 0
00:11:36.397  NAND Bytes Written:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 104435359
00:11:36.397  Host Bytes Written:
00:11:36.397    Normalized Value : 100
00:11:36.397    Current Raw Value: 12066361
00:11:36.397  
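
The log does not state the unit behind the two vendor counters above, but as long as NAND Bytes Written and Host Bytes Written share one, their ratio (~8.66) serves as a rough write-amplification indicator. If the unit is 32 MiB (an assumption, though it makes Host Bytes Written land on ~405 TB, matching the Data Units Written figure earlier), the NAND side comes to ~3.5 PB:

    /* Ratio and (assumed) absolute figures for the vendor counters. */
    #include <stdio.h>

    int main(void)
    {
        double nand_units = 104435359, host_units = 12066361;
        printf("NAND/host ratio: %.2f\n", nand_units / host_units); /* ~8.66 */

        /* 32 MiB per unit is an assumption, not stated in the log. */
        const double unit = 32.0 * 1024 * 1024;
        printf("host: %.1f TB, NAND: %.2f PB\n",
               host_units * unit / 1e12, nand_units * unit / 1e15);
        return 0;
    }
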
00:11:36.397  Intel Temperature Information
00:11:36.397  =============================
00:11:36.397  Current Temperature: 37
00:11:36.397  Overtemp shutdown Flag for last critical component temperature: 0
00:11:36.397  Overtemp shutdown Flag for life critical component temperature: 0
00:11:36.397  Highest temperature: 73
00:11:36.397  Lowest temperature: 21
00:11:36.397  Specified Maximum Operating Temperature: 70
00:11:36.397  Specified Minimum Operating Temperature: 0
00:11:36.397  Estimated offset: 0
00:11:36.397  
00:11:36.397  
00:11:36.397  Intel Marketing Information
00:11:36.397  ===========================
00:11:36.397  Marketing Product Information:		Intel(R) SSD DC P4510 Series
00:11:36.397  
00:11:36.397  
00:11:36.397  Active Namespaces
00:11:36.397  =================
00:11:36.397  Namespace ID:1
00:11:36.397  Error Recovery Timeout:                Unlimited
00:11:36.397  Command Set Identifier:                NVM (00h)
00:11:36.397  Deallocate:                            Supported
00:11:36.397  Deallocated/Unwritten Error:           Not Supported
00:11:36.397  Deallocated Read Value:                All 0x00
00:11:36.397  Deallocate in Write Zeroes:            Not Supported
00:11:36.397  Deallocated Guard Field:               0xFFFF
00:11:36.397  Flush:                                 Not Supported
00:11:36.397  Reservation:                           Not Supported
00:11:36.397  Namespace Sharing Capabilities:        Private
00:11:36.397  Size (in LBAs):                        7814037168 (3726GiB)
00:11:36.397  Capacity (in LBAs):                    7814037168 (3726GiB)
00:11:36.398  Utilization (in LBAs):                 7814037168 (3726GiB)
00:11:36.398  NGUID:                                 010000009F6E00000000000000000000
00:11:36.398  EUI64:                                 0000000000009F6E
00:11:36.398  Thin Provisioning:                     Not Supported
00:11:36.398  Per-NS Atomic Units:                   No
00:11:36.398  NGUID/EUI64 Never Reused:              No
00:11:36.398  Namespace Write Protected:             No
00:11:36.398  Number of LBA Formats:                 2
00:11:36.398  Current LBA Format:                    LBA Format #00
00:11:36.398  LBA Format #00: Data Size:   512  Metadata Size:     0
00:11:36.398  LBA Format #01: Data Size:  4096  Metadata Size:     0
00:11:36.398  
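The namespace figures above are counted in LBAs of the current format (LBA Format #00, 512-byte data size), so the GiB value in parentheses follows directly from the LBA count:

  # 7814037168 LBAs x 512 B each, floored to whole GiB (2^30 B)
  echo "$(( 7814037168 * 512 / 1024 / 1024 / 1024 )) GiB"   # prints: 3726 GiB
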
00:11:36.398   10:49:24	-- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:11:36.398   10:49:24	-- nvme/nvme.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:11:36.398  =====================================================
00:11:36.398  NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:11:36.398  =====================================================
00:11:36.398  Controller Capabilities/Features
00:11:36.398  ================================
00:11:36.398  Vendor ID:                             8086
00:11:36.398  Subsystem Vendor ID:                   8086
00:11:36.398  Serial Number:                         BTLJ83030AK84P0DGN
00:11:36.398  Model Number:                          INTEL SSDPE2KX040T8
00:11:36.398  Firmware Version:                      VDV10184
00:11:36.398  Recommended Arb Burst:                 0
00:11:36.398  IEEE OUI Identifier:                   e4 d2 5c
00:11:36.398  Multi-path I/O
00:11:36.398    May have multiple subsystem ports:   No
00:11:36.398    May have multiple controllers:       No
00:11:36.398    Associated with SR-IOV VF:           No
00:11:36.398  Max Data Transfer Size:                131072
00:11:36.398  Max Number of Namespaces:              128
00:11:36.398  Max Number of I/O Queues:              128
00:11:36.398  NVMe Specification Version (VS):       1.2
00:11:36.398  NVMe Specification Version (Identify): 1.2
00:11:36.398  Maximum Queue Entries:                 4096
00:11:36.398  Contiguous Queues Required:            Yes
00:11:36.398  Arbitration Mechanisms Supported
00:11:36.398    Weighted Round Robin:                Supported
00:11:36.398    Vendor Specific:                     Not Supported
00:11:36.398  Reset Timeout:                         60000 ms
00:11:36.398  Doorbell Stride:                       4 bytes
00:11:36.398  NVM Subsystem Reset:                   Not Supported
00:11:36.398  Command Sets Supported
00:11:36.398    NVM Command Set:                     Supported
00:11:36.398  Boot Partition:                        Not Supported
00:11:36.398  Memory Page Size Minimum:              4096 bytes
00:11:36.398  Memory Page Size Maximum:              4096 bytes
00:11:36.398  Persistent Memory Region:              Not Supported
00:11:36.398  Optional Asynchronous Events Supported
00:11:36.398    Namespace Attribute Notices:         Not Supported
00:11:36.398    Firmware Activation Notices:         Supported
00:11:36.398    ANA Change Notices:                  Not Supported
00:11:36.398    PLE Aggregate Log Change Notices:    Not Supported
00:11:36.398    LBA Status Info Alert Notices:       Not Supported
00:11:36.398    EGE Aggregate Log Change Notices:    Not Supported
00:11:36.398    Normal NVM Subsystem Shutdown event: Not Supported
00:11:36.398    Zone Descriptor Change Notices:      Not Supported
00:11:36.398    Discovery Log Change Notices:        Not Supported
00:11:36.398  Controller Attributes
00:11:36.398    128-bit Host Identifier:             Not Supported
00:11:36.398    Non-Operational Permissive Mode:     Not Supported
00:11:36.398    NVM Sets:                            Not Supported
00:11:36.398    Read Recovery Levels:                Not Supported
00:11:36.398    Endurance Groups:                    Not Supported
00:11:36.398    Predictable Latency Mode:            Not Supported
00:11:36.398    Traffic Based Keep Alive:            Not Supported
00:11:36.398    Namespace Granularity:               Not Supported
00:11:36.398    SQ Associations:                     Not Supported
00:11:36.398    UUID List:                           Not Supported
00:11:36.398    Multi-Domain Subsystem:              Not Supported
00:11:36.398    Fixed Capacity Management:           Not Supported
00:11:36.398    Variable Capacity Management:        Not Supported
00:11:36.398    Delete Endurance Group:              Not Supported
00:11:36.398    Delete NVM Set:                      Not Supported
00:11:36.398    Extended LBA Formats Supported:      Not Supported
00:11:36.398    Flexible Data Placement Supported:   Not Supported
00:11:36.398  
00:11:36.398  Controller Memory Buffer Support
00:11:36.398  ================================
00:11:36.398  Supported:                             No
00:11:36.398  
00:11:36.398  Persistent Memory Region Support
00:11:36.398  ================================
00:11:36.398  Supported:                             No
00:11:36.398  
00:11:36.398  Admin Command Set Attributes
00:11:36.398  ============================
00:11:36.398  Security Send/Receive:                 Not Supported
00:11:36.398  Format NVM:                            Supported
00:11:36.398  Firmware Activate/Download:            Supported
00:11:36.398  Namespace Management:                  Supported
00:11:36.398  Device Self-Test:                      Not Supported
00:11:36.398  Directives:                            Not Supported
00:11:36.398  NVMe-MI:                               Not Supported
00:11:36.398  Virtualization Management:             Not Supported
00:11:36.398  Doorbell Buffer Config:                Not Supported
00:11:36.398  Get LBA Status Capability:             Not Supported
00:11:36.398  Command & Feature Lockdown Capability: Not Supported
00:11:36.398  Abort Command Limit:                   4
00:11:36.398  Async Event Request Limit:             4
00:11:36.398  Number of Firmware Slots:              4
00:11:36.398  Firmware Slot 1 Read-Only:             No
00:11:36.398  Firmware Activation Without Reset:     Yes
00:11:36.398  Multiple Update Detection Support:     No
00:11:36.398  Firmware Update Granularity:           No Information Provided
00:11:36.398  Per-Namespace SMART Log:               No
00:11:36.398  Asymmetric Namespace Access Log Page:  Not Supported
00:11:36.398  Subsystem NQN:                         
00:11:36.398  Command Effects Log Page:              Supported
00:11:36.398  Get Log Page Extended Data:            Supported
00:11:36.398  Telemetry Log Pages:                   Supported
00:11:36.398  Persistent Event Log Pages:            Not Supported
00:11:36.398  Supported Log Pages Log Page:          May Support
00:11:36.398  Commands Supported & Effects Log Page: Not Supported
00:11:36.398  Feature Identifiers & Effects Log Page: May Support
00:11:36.398  NVMe-MI Commands & Effects Log Page:   May Support
00:11:36.398  Data Area 4 for Telemetry Log:         Not Supported
00:11:36.398  Error Log Page Entries Supported:      64
00:11:36.398  Keep Alive:                            Not Supported
00:11:36.398  
00:11:36.398  NVM Command Set Attributes
00:11:36.398  ==========================
00:11:36.398  Submission Queue Entry Size
00:11:36.398    Max:                       64
00:11:36.398    Min:                       64
00:11:36.398  Completion Queue Entry Size
00:11:36.398    Max:                       16
00:11:36.398    Min:                       16
00:11:36.398  Number of Namespaces:        128
00:11:36.398  Compare Command:             Not Supported
00:11:36.398  Write Uncorrectable Command: Supported
00:11:36.398  Dataset Management Command:  Supported
00:11:36.398  Write Zeroes Command:        Not Supported
00:11:36.398  Set Features Save Field:     Not Supported
00:11:36.398  Reservations:                Not Supported
00:11:36.398  Timestamp:                   Not Supported
00:11:36.398  Copy:                        Not Supported
00:11:36.398  Volatile Write Cache:        Not Present
00:11:36.398  Atomic Write Unit (Normal):  1
00:11:36.398  Atomic Write Unit (PFail):   1
00:11:36.398  Atomic Compare & Write Unit: 1
00:11:36.398  Fused Compare & Write:       Not Supported
00:11:36.398  Scatter-Gather List
00:11:36.398    SGL Command Set:           Not Supported
00:11:36.398    SGL Keyed:                 Not Supported
00:11:36.398    SGL Bit Bucket Descriptor: Not Supported
00:11:36.398    SGL Metadata Pointer:      Not Supported
00:11:36.398    Oversized SGL:             Not Supported
00:11:36.398    SGL Metadata Address:      Not Supported
00:11:36.398    SGL Offset:                Not Supported
00:11:36.398    Transport SGL Data Block:  Not Supported
00:11:36.398  Replay Protected Memory Block:  Not Supported
00:11:36.398  
00:11:36.398  Firmware Slot Information
00:11:36.398  =========================
00:11:36.398  Active slot:                 1
00:11:36.398  Slot 1 Firmware Revision:    VDV10184
00:11:36.398  
00:11:36.398  
00:11:36.398  Commands Supported and Effects
00:11:36.398  ==============================
00:11:36.398  Admin Commands
00:11:36.398  --------------
00:11:36.398     Delete I/O Submission Queue (00h): Supported 
00:11:36.398     Create I/O Submission Queue (01h): Supported All-NS-Exclusive
00:11:36.398                    Get Log Page (02h): Supported 
00:11:36.398     Delete I/O Completion Queue (04h): Supported 
00:11:36.398     Create I/O Completion Queue (05h): Supported All-NS-Exclusive
00:11:36.398                        Identify (06h): Supported 
00:11:36.398                           Abort (08h): Supported 
00:11:36.398                    Set Features (09h): Supported NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change 
00:11:36.398                    Get Features (0Ah): Supported 
00:11:36.398      Asynchronous Event Request (0Ch): Supported 
00:11:36.398            Namespace Management (0Dh): Supported LBA-Change NS-Cap-Change Per-NS-Exclusive
00:11:36.398                 Firmware Commit (10h): Supported Ctrlr-Cap-Change 
00:11:36.398         Firmware Image Download (11h): Supported 
00:11:36.398            Namespace Attachment (15h): Supported Per-NS-Exclusive
00:11:36.398                      Format NVM (80h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change Ctrlr-Cap-Change Per-NS-Exclusive
00:11:36.398                 Vendor specific (C8h): Supported 
00:11:36.398                 Vendor specific (D2h): Supported 
00:11:36.398                 Vendor specific (E1h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:11:36.398                 Vendor specific (E2h): Supported LBA-Change NS-Cap-Change NS-Inventory-Change All-NS-Exclusive
00:11:36.398  I/O Commands
00:11:36.398  ------------
00:11:36.398                           Flush (00h): Supported LBA-Change 
00:11:36.398                           Write (01h): Supported LBA-Change 
00:11:36.398                            Read (02h): Supported 
00:11:36.398             Write Uncorrectable (04h): Supported LBA-Change 
00:11:36.398              Dataset Management (09h): Supported LBA-Change 
00:11:36.398  
00:11:36.398  Error Log
00:11:36.398  =========
00:11:36.398  Entry: 0
00:11:36.398  Error Count:            0x970a
00:11:36.398  Submission Queue Id:    0x2
00:11:36.398  Command Id:             0xffff
00:11:36.398  Phase Bit:              0
00:11:36.398  Status Code:            0x6
00:11:36.398  Status Code Type:       0x0
00:11:36.398  Do Not Retry:           1
00:11:36.398  Error Location:         0xffff
00:11:36.398  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 1
00:11:36.399  Error Count:            0x9709
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 2
00:11:36.399  Error Count:            0x9708
00:11:36.399  Submission Queue Id:    0x0
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 3
00:11:36.399  Error Count:            0x9707
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 4
00:11:36.399  Error Count:            0x9706
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 5
00:11:36.399  Error Count:            0x9705
00:11:36.399  Submission Queue Id:    0x0
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 6
00:11:36.399  Error Count:            0x9704
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 7
00:11:36.399  Error Count:            0x9703
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 8
00:11:36.399  Error Count:            0x9702
00:11:36.399  Submission Queue Id:    0x0
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 9
00:11:36.399  Error Count:            0x9701
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 10
00:11:36.399  Error Count:            0x9700
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 11
00:11:36.399  Error Count:            0x96ff
00:11:36.399  Submission Queue Id:    0x0
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 12
00:11:36.399  Error Count:            0x96fe
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 13
00:11:36.399  Error Count:            0x96fd
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 14
00:11:36.399  Error Count:            0x96fc
00:11:36.399  Submission Queue Id:    0x0
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 15
00:11:36.399  Error Count:            0x96fb
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 16
00:11:36.399  Error Count:            0x96fa
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 17
00:11:36.399  Error Count:            0x96f9
00:11:36.399  Submission Queue Id:    0x0
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 18
00:11:36.399  Error Count:            0x96f8
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 19
00:11:36.399  Error Count:            0x96f7
00:11:36.399  Submission Queue Id:    0x2
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.399  Do Not Retry:           1
00:11:36.399  Error Location:         0xffff
00:11:36.399  LBA:                    0x0
00:11:36.399  Namespace:              0xffffffff
00:11:36.399  Vendor Log Page:        0x0
00:11:36.399  -----------
00:11:36.399  Entry: 20
00:11:36.399  Error Count:            0x96f6
00:11:36.399  Submission Queue Id:    0x0
00:11:36.399  Command Id:             0xffff
00:11:36.399  Phase Bit:              0
00:11:36.399  Status Code:            0x6
00:11:36.399  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 21
00:11:36.400  Error Count:            0x96f5
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 22
00:11:36.400  Error Count:            0x96f4
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 23
00:11:36.400  Error Count:            0x96f3
00:11:36.400  Submission Queue Id:    0x0
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 24
00:11:36.400  Error Count:            0x96f2
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 25
00:11:36.400  Error Count:            0x96f1
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 26
00:11:36.400  Error Count:            0x96f0
00:11:36.400  Submission Queue Id:    0x0
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 27
00:11:36.400  Error Count:            0x96ef
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 28
00:11:36.400  Error Count:            0x96ee
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 29
00:11:36.400  Error Count:            0x96ed
00:11:36.400  Submission Queue Id:    0x0
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 30
00:11:36.400  Error Count:            0x96ec
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 31
00:11:36.400  Error Count:            0x96eb
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 32
00:11:36.400  Error Count:            0x96ea
00:11:36.400  Submission Queue Id:    0x0
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 33
00:11:36.400  Error Count:            0x96e9
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 34
00:11:36.400  Error Count:            0x96e8
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 35
00:11:36.400  Error Count:            0x96e7
00:11:36.400  Submission Queue Id:    0x0
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 36
00:11:36.400  Error Count:            0x96e6
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 37
00:11:36.400  Error Count:            0x96e5
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 38
00:11:36.400  Error Count:            0x96e4
00:11:36.400  Submission Queue Id:    0x0
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 39
00:11:36.400  Error Count:            0x96e3
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.400  Error Location:         0xffff
00:11:36.400  LBA:                    0x0
00:11:36.400  Namespace:              0xffffffff
00:11:36.400  Vendor Log Page:        0x0
00:11:36.400  -----------
00:11:36.400  Entry: 40
00:11:36.400  Error Count:            0x96e2
00:11:36.400  Submission Queue Id:    0x2
00:11:36.400  Command Id:             0xffff
00:11:36.400  Phase Bit:              0
00:11:36.400  Status Code:            0x6
00:11:36.400  Status Code Type:       0x0
00:11:36.400  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 41
00:11:36.401  Error Count:            0x96e1
00:11:36.401  Submission Queue Id:    0x0
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 42
00:11:36.401  Error Count:            0x96e0
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 43
00:11:36.401  Error Count:            0x96df
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 44
00:11:36.401  Error Count:            0x96de
00:11:36.401  Submission Queue Id:    0x0
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 45
00:11:36.401  Error Count:            0x96dd
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 46
00:11:36.401  Error Count:            0x96dc
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 47
00:11:36.401  Error Count:            0x96db
00:11:36.401  Submission Queue Id:    0x0
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 48
00:11:36.401  Error Count:            0x96da
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 49
00:11:36.401  Error Count:            0x96d9
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 50
00:11:36.401  Error Count:            0x96d8
00:11:36.401  Submission Queue Id:    0x0
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 51
00:11:36.401  Error Count:            0x96d7
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 52
00:11:36.401  Error Count:            0x96d6
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 53
00:11:36.401  Error Count:            0x96d5
00:11:36.401  Submission Queue Id:    0x0
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 54
00:11:36.401  Error Count:            0x96d4
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 55
00:11:36.401  Error Count:            0x96d3
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 56
00:11:36.401  Error Count:            0x96d2
00:11:36.401  Submission Queue Id:    0x0
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 57
00:11:36.401  Error Count:            0x96d1
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 58
00:11:36.401  Error Count:            0x96d0
00:11:36.401  Submission Queue Id:    0x2
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 59
00:11:36.401  Error Count:            0x96cf
00:11:36.401  Submission Queue Id:    0x0
00:11:36.401  Command Id:             0xffff
00:11:36.401  Phase Bit:              0
00:11:36.401  Status Code:            0x6
00:11:36.401  Status Code Type:       0x0
00:11:36.401  Do Not Retry:           1
00:11:36.401  Error Location:         0xffff
00:11:36.401  LBA:                    0x0
00:11:36.401  Namespace:              0xffffffff
00:11:36.401  Vendor Log Page:        0x0
00:11:36.401  -----------
00:11:36.401  Entry: 60
00:11:36.401  Error Count:            0x96ce
00:11:36.401  Submission Queue Id:    0x2
00:11:36.402  Command Id:             0xffff
00:11:36.402  Phase Bit:              0
00:11:36.402  Status Code:            0x6
00:11:36.402  Status Code Type:       0x0
00:11:36.402  Do Not Retry:           1
00:11:36.402  Error Location:         0xffff
00:11:36.402  LBA:                    0x0
00:11:36.402  Namespace:              0xffffffff
00:11:36.402  Vendor Log Page:        0x0
00:11:36.402  -----------
00:11:36.402  Entry: 61
00:11:36.402  Error Count:            0x96cd
00:11:36.402  Submission Queue Id:    0x2
00:11:36.402  Command Id:             0xffff
00:11:36.402  Phase Bit:              0
00:11:36.402  Status Code:            0x6
00:11:36.402  Status Code Type:       0x0
00:11:36.402  Do Not Retry:           1
00:11:36.402  Error Location:         0xffff
00:11:36.402  LBA:                    0x0
00:11:36.402  Namespace:              0xffffffff
00:11:36.402  Vendor Log Page:        0x0
00:11:36.402  -----------
00:11:36.402  Entry: 62
00:11:36.402  Error Count:            0x96cc
00:11:36.402  Submission Queue Id:    0x0
00:11:36.402  Command Id:             0xffff
00:11:36.402  Phase Bit:              0
00:11:36.402  Status Code:            0x6
00:11:36.402  Status Code Type:       0x0
00:11:36.402  Do Not Retry:           1
00:11:36.402  Error Location:         0xffff
00:11:36.402  LBA:                    0x0
00:11:36.402  Namespace:              0xffffffff
00:11:36.402  Vendor Log Page:        0x0
00:11:36.402  -----------
00:11:36.402  Entry: 63
00:11:36.402  Error Count:            0x96cb
00:11:36.402  Submission Queue Id:    0x2
00:11:36.402  Command Id:             0xffff
00:11:36.402  Phase Bit:              0
00:11:36.402  Status Code:            0x6
00:11:36.402  Status Code Type:       0x0
00:11:36.402  Do Not Retry:           1
00:11:36.402  Error Location:         0xffff
00:11:36.402  LBA:                    0x0
00:11:36.402  Namespace:              0xffffffff
00:11:36.402  Vendor Log Page:        0x0
00:11:36.402  
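All 64 entries decode identically: Status Code Type 0x0 is the generic command status set, in which Status Code 0x6 is Internal Error, with Do Not Retry set and no valid command, LBA, or namespace attribution (the 0xffff/0xffffffff fields). Error Count is a rolling identifier: the newest entry, 0x970a, is 38666 in decimal, matching the Lifetime Error Log Entries figure in the health data. The same log can be fetched outside SPDK, under the same nvme-cli and /dev/nvme0 assumptions as above:

  # Error Information log (log page 0x01); -e caps the number of entries
  nvme error-log /dev/nvme0 -e 64
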
00:11:36.402  Arbitration
00:11:36.402  ===========
00:11:36.402  Arbitration Burst:           1
00:11:36.402  Low Priority Weight:         1
00:11:36.402  Medium Priority Weight:      1
00:11:36.402  High Priority Weight:        1
00:11:36.402  
00:11:36.402  Power Management
00:11:36.402  ================
00:11:36.402  Number of Power States:          1
00:11:36.402  Current Power State:             Power State #0
00:11:36.402  Power State #0:
00:11:36.402    Max Power:                     20.00 W
00:11:36.402    Non-Operational State:         Operational
00:11:36.402    Entry Latency:                 Not Reported
00:11:36.402    Exit Latency:                  Not Reported
00:11:36.402    Relative Read Throughput:      0
00:11:36.402    Relative Read Latency:         0
00:11:36.402    Relative Write Throughput:     0
00:11:36.402    Relative Write Latency:        0
00:11:36.402    Idle Power:                    Not Reported
00:11:36.402    Active Power:                  Not Reported
00:11:36.402  Non-Operational Permissive Mode: Not Supported
00:11:36.402  
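A single operational power state with no idle/active power reporting is typical of a datacenter part that never enters non-operational states. The current power state is exposed through the Power Management feature (ID 0x02), readable with the same assumed tooling:

  # Read feature 0x02 (Power Management); -H decodes the power state field
  nvme get-feature /dev/nvme0 -f 0x02 -H
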
00:11:36.402  Health Information
00:11:36.402  ==================
00:11:36.402  Critical Warnings:
00:11:36.402    Available Spare Space:     OK
00:11:36.402    Temperature:               OK
00:11:36.402    Device Reliability:        OK
00:11:36.402    Read Only:                 No
00:11:36.402    Volatile Memory Backup:    OK
00:11:36.402  Current Temperature:         310 Kelvin (37 Celsius)
00:11:36.402  Temperature Threshold:       343 Kelvin (70 Celsius)
00:11:36.402  Available Spare:             99%
00:11:36.402  Available Spare Threshold:   10%
00:11:36.402  Life Percentage Used:        32%
00:11:36.402  Data Units Read:             628349442
00:11:36.402  Data Units Written:          790781043
00:11:36.402  Host Read Commands:          36984307828
00:11:36.402  Host Write Commands:         42949631884
00:11:36.402  Controller Busy Time:        3917 minutes
00:11:36.402  Power Cycles:                31
00:11:36.402  Power On Hours:              20842 hours
00:11:36.402  Unsafe Shutdowns:            46
00:11:36.402  Unrecoverable Media Errors:  0
00:11:36.402  Lifetime Error Log Entries:  38666
00:11:36.402  Warning Temperature Time:    2198 minutes
00:11:36.402  Critical Temperature Time:   0 minutes
00:11:36.402  
00:11:36.402  Number of Queues
00:11:36.402  ================
00:11:36.402  Number of I/O Submission Queues:      128
00:11:36.402  Number of I/O Completion Queues:      128
00:11:36.402  
00:11:36.402  Intel Health Information
00:11:36.402  ========================
00:11:36.402  Program Fail Count:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 6
00:11:36.402  Erase Fail Count:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 1
00:11:36.402  Wear Leveling Count:
00:11:36.402    Normalized Value : 65
00:11:36.402    Current Raw Value:
00:11:36.402    Min: 308
00:11:36.402    Max: 1772
00:11:36.402    Avg: 1520
00:11:36.402  End to End Error Detection Count:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 0
00:11:36.402  CRC Error Count:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 0
00:11:36.402  Timed Workload, Media Wear:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 65535
00:11:36.402  Timed Workload, Host Read/Write Ratio:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 65535%
00:11:36.402  Timed Workload, Timer:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 65535
00:11:36.402  Thermal Throttle Status:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value:
00:11:36.402    Percentage: 0%
00:11:36.402    Throttling Event Count: 1
00:11:36.402  Retry Buffer Overflow Counter:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 0
00:11:36.402  PLL Lock Loss Count:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 0
00:11:36.402  NAND Bytes Written:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 104435359
00:11:36.402  Host Bytes Written:
00:11:36.402    Normalized Value : 100
00:11:36.402    Current Raw Value: 12066361
00:11:36.402  
00:11:36.402  Intel Temperature Information
00:11:36.402  =============================
00:11:36.402  Current Temperature: 37
00:11:36.402  Overtemp shutdown Flag for last critical component temperature: 0
00:11:36.402  Overtemp shutdown Flag for life critical component temperature: 0
00:11:36.402  Highest temperature: 73
00:11:36.402  Lowest temperature: 21
00:11:36.402  Specified Maximum Operating Temperature: 70
00:11:36.402  Specified Minimum Operating Temperature: 0
00:11:36.402  Estimated offset: 0
00:11:36.402  
00:11:36.402  
00:11:36.402  Intel Marketing Information
00:11:36.402  ===========================
00:11:36.402  Marketing Product Information:		Intel(R) SSD DC P4510 Series
00:11:36.402  
00:11:36.402  
00:11:36.402  Active Namespaces
00:11:36.402  =================
00:11:36.402  Namespace ID:1
00:11:36.402  Error Recovery Timeout:                Unlimited
00:11:36.402  Command Set Identifier:                NVM (00h)
00:11:36.402  Deallocate:                            Supported
00:11:36.402  Deallocated/Unwritten Error:           Not Supported
00:11:36.402  Deallocated Read Value:                All 0x00
00:11:36.402  Deallocate in Write Zeroes:            Not Supported
00:11:36.402  Deallocated Guard Field:               0xFFFF
00:11:36.402  Flush:                                 Not Supported
00:11:36.402  Reservation:                           Not Supported
00:11:36.402  Namespace Sharing Capabilities:        Private
00:11:36.402  Size (in LBAs):                        7814037168 (3726GiB)
00:11:36.402  Capacity (in LBAs):                    7814037168 (3726GiB)
00:11:36.402  Utilization (in LBAs):                 7814037168 (3726GiB)
00:11:36.402  NGUID:                                 010000009F6E00000000000000000000
00:11:36.402  EUI64:                                 0000000000009F6E
00:11:36.402  Thin Provisioning:                     Not Supported
00:11:36.402  Per-NS Atomic Units:                   No
00:11:36.402  NGUID/EUI64 Never Reused:              No
00:11:36.402  Namespace Write Protected:             No
00:11:36.402  Number of LBA Formats:                 2
00:11:36.402  Current LBA Format:                    LBA Format #00
00:11:36.402  LBA Format #00: Data Size:   512  Metadata Size:     0
00:11:36.402  LBA Format #01: Data Size:  4096  Metadata Size:     0
00:11:36.403  
00:11:36.403  
00:11:36.403  real	0m0.769s
00:11:36.403  user	0m0.255s
00:11:36.403  sys	0m0.419s
00:11:36.403   10:49:25	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:36.403   10:49:25	-- common/autotest_common.sh@10 -- # set +x
00:11:36.403  ************************************
00:11:36.403  END TEST nvme_identify
00:11:36.403  ************************************
00:11:36.403   10:49:25	-- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:11:36.403   10:49:25	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:36.403   10:49:25	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:36.403   10:49:25	-- common/autotest_common.sh@10 -- # set +x
00:11:36.403  ************************************
00:11:36.403  START TEST nvme_perf
00:11:36.403  ************************************
00:11:36.403   10:49:25	-- common/autotest_common.sh@1114 -- # nvme_perf
00:11:36.403   10:49:25	-- nvme/nvme.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:11:37.785  Initializing NVMe Controllers
00:11:37.785  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:11:37.785  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:11:37.785  Initialization complete. Launching workers.
00:11:37.785  ========================================================
00:11:37.785                                                                             Latency(us)
00:11:37.785  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:37.785  PCIE (0000:5e:00.0) NSID 1 from core  0:  103193.87    1209.30    1239.51      76.24    3260.65
00:11:37.785  ========================================================
00:11:37.785  Total                                  :  103193.87    1209.30    1239.51      76.24    3260.65
00:11:37.785  
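The MiB/s column is simply the measured IOPS multiplied by the 12288-byte (12 KiB) I/O size requested with -o on the command line; checking the row above:

  # 103193.87 IOPS x 12288 B per I/O, expressed in MiB/s (2^20 B)
  awk 'BEGIN { printf "%.2f MiB/s\n", 103193.87 * 12288 / 1048576 }'   # 1209.30 MiB/s
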
00:11:37.785  Summary latency data for PCIE (0000:5e:00.0) NSID 1                  from core 0:
00:11:37.785  =================================================================================
00:11:37.785    1.00000% :   223.499us
00:11:37.785   10.00000% :   537.823us
00:11:37.785   25.00000% :   808.515us
00:11:37.785   50.00000% :  1232.362us
00:11:37.785   75.00000% :  1652.647us
00:11:37.785   90.00000% :  1966.080us
00:11:37.785   95.00000% :  2122.797us
00:11:37.785   98.00000% :  2308.007us
00:11:37.785   99.00000% :  2450.477us
00:11:37.785   99.50000% :  2564.452us
00:11:37.785   99.90000% :  2778.157us
00:11:37.785   99.99000% :  3006.108us
00:11:37.785   99.99900% :  3248.306us
00:11:37.785   99.99990% :  3262.553us
00:11:37.785   99.99999% :  3262.553us
00:11:37.785  
00:11:37.785  Latency histogram for PCIE (0000:5e:00.0) NSID 1                  from core 0:
00:11:37.785  ==============================================================================
00:11:37.785         Range in us     Cumulative    IO count
00:11:37.785     76.132 -    76.577:    0.0010%  (        1)
00:11:37.785     77.023 -    77.468:    0.0019%  (        1)
00:11:37.785     79.249 -    79.694:    0.0029%  (        1)
00:11:37.785     80.584 -    81.030:    0.0039%  (        1)
00:11:37.785     81.475 -    81.920:    0.0048%  (        1)
00:11:37.785     83.256 -    83.701:    0.0058%  (        1)
00:11:37.785     85.037 -    85.482:    0.0078%  (        2)
00:11:37.785     85.482 -    85.927:    0.0087%  (        1)
00:11:37.785     86.817 -    87.263:    0.0107%  (        2)
00:11:37.785     88.598 -    89.043:    0.0116%  (        1)
00:11:37.785     89.489 -    89.934:    0.0155%  (        4)
00:11:37.785     90.379 -    90.824:    0.0165%  (        1)
00:11:37.785     90.824 -    91.270:    0.0174%  (        1)
00:11:37.785     91.270 -    91.715:    0.0184%  (        1)
00:11:37.785     92.160 -    92.605:    0.0194%  (        1)
00:11:37.785     92.605 -    93.050:    0.0203%  (        1)
00:11:37.785     93.050 -    93.496:    0.0213%  (        1)
00:11:37.785     93.496 -    93.941:    0.0223%  (        1)
00:11:37.785     94.831 -    95.277:    0.0233%  (        1)
00:11:37.785     95.277 -    95.722:    0.0262%  (        3)
00:11:37.785     96.167 -    96.612:    0.0271%  (        1)
00:11:37.785     96.612 -    97.057:    0.0281%  (        1)
00:11:37.785     97.057 -    97.503:    0.0300%  (        2)
00:11:37.785     97.503 -    97.948:    0.0310%  (        1)
00:11:37.785     99.283 -    99.729:    0.0329%  (        2)
00:11:37.785     99.729 -   100.174:    0.0339%  (        1)
00:11:37.785    100.619 -   101.064:    0.0359%  (        2)
00:11:37.785    101.510 -   101.955:    0.0368%  (        1)
00:11:37.785    101.955 -   102.400:    0.0388%  (        2)
00:11:37.785    102.400 -   102.845:    0.0407%  (        2)
00:11:37.785    103.736 -   104.181:    0.0417%  (        1)
00:11:37.785    104.181 -   104.626:    0.0446%  (        3)
00:11:37.785    104.626 -   105.071:    0.0465%  (        2)
00:11:37.785    105.071 -   105.517:    0.0475%  (        1)
00:11:37.785    106.852 -   107.297:    0.0494%  (        2)
00:11:37.785    107.297 -   107.743:    0.0504%  (        1)
00:11:37.785    107.743 -   108.188:    0.0514%  (        1)
00:11:37.785    108.188 -   108.633:    0.0543%  (        3)
00:11:37.785    108.633 -   109.078:    0.0552%  (        1)
00:11:37.785    109.078 -   109.523:    0.0562%  (        1)
00:11:37.785    109.523 -   109.969:    0.0581%  (        2)
00:11:37.785    110.414 -   110.859:    0.0601%  (        2)
00:11:37.785    111.304 -   111.750:    0.0610%  (        1)
00:11:37.785    112.640 -   113.085:    0.0620%  (        1)
00:11:37.785    113.530 -   113.976:    0.0630%  (        1)
00:11:37.785    113.976 -   114.866:    0.0649%  (        2)
00:11:37.785    114.866 -   115.757:    0.0669%  (        2)
00:11:37.785    115.757 -   116.647:    0.0698%  (        3)
00:11:37.785    116.647 -   117.537:    0.0717%  (        2)
00:11:37.785    117.537 -   118.428:    0.0756%  (        4)
00:11:37.785    118.428 -   119.318:    0.0766%  (        1)
00:11:37.785    119.318 -   120.209:    0.0785%  (        2)
00:11:37.785    120.209 -   121.099:    0.0804%  (        2)
00:11:37.785    121.099 -   121.990:    0.0824%  (        2)
00:11:37.786    121.990 -   122.880:    0.0853%  (        3)
00:11:37.786    122.880 -   123.770:    0.0921%  (        7)
00:11:37.786    123.770 -   124.661:    0.0979%  (        6)
00:11:37.786    124.661 -   125.551:    0.1066%  (        9)
00:11:37.786    125.551 -   126.442:    0.1143%  (        8)
00:11:37.786    126.442 -   127.332:    0.1153%  (        1)
00:11:37.786    127.332 -   128.223:    0.1202%  (        5)
00:11:37.786    128.223 -   129.113:    0.1231%  (        3)
00:11:37.786    129.113 -   130.003:    0.1279%  (        5)
00:11:37.786    130.003 -   130.894:    0.1298%  (        2)
00:11:37.786    130.894 -   131.784:    0.1376%  (        8)
00:11:37.786    131.784 -   132.675:    0.1415%  (        4)
00:11:37.786    132.675 -   133.565:    0.1483%  (        7)
00:11:37.786    133.565 -   134.456:    0.1560%  (        8)
00:11:37.786    134.456 -   135.346:    0.1609%  (        5)
00:11:37.786    135.346 -   136.237:    0.1667%  (        6)
00:11:37.786    136.237 -   137.127:    0.1793%  (       13)
00:11:37.786    137.127 -   138.017:    0.1831%  (        4)
00:11:37.786    138.017 -   138.908:    0.1928%  (       10)
00:11:37.786    138.908 -   139.798:    0.1957%  (        3)
00:11:37.786    139.798 -   140.689:    0.2045%  (        9)
00:11:37.786    140.689 -   141.579:    0.2064%  (        2)
00:11:37.786    141.579 -   142.470:    0.2151%  (        9)
00:11:37.786    142.470 -   143.360:    0.2200%  (        5)
00:11:37.786    143.360 -   144.250:    0.2258%  (        6)
00:11:37.786    144.250 -   145.141:    0.2345%  (        9)
00:11:37.786    145.141 -   146.031:    0.2403%  (        6)
00:11:37.786    146.031 -   146.922:    0.2481%  (        8)
00:11:37.786    146.922 -   147.812:    0.2510%  (        3)
00:11:37.786    147.812 -   148.703:    0.2636%  (       13)
00:11:37.786    148.703 -   149.593:    0.2655%  (        2)
00:11:37.786    149.593 -   150.483:    0.2723%  (        7)
00:11:37.786    150.483 -   151.374:    0.2781%  (        6)
00:11:37.786    151.374 -   152.264:    0.2878%  (       10)
00:11:37.786    152.264 -   153.155:    0.2917%  (        4)
00:11:37.786    153.155 -   154.045:    0.2994%  (        8)
00:11:37.786    154.045 -   154.936:    0.3062%  (        7)
00:11:37.786    154.936 -   155.826:    0.3091%  (        3)
00:11:37.786    155.826 -   156.717:    0.3149%  (        6)
00:11:37.786    156.717 -   157.607:    0.3227%  (        8)
00:11:37.786    157.607 -   158.497:    0.3295%  (        7)
00:11:37.786    158.497 -   159.388:    0.3362%  (        7)
00:11:37.786    159.388 -   160.278:    0.3440%  (        8)
00:11:37.786    160.278 -   161.169:    0.3508%  (        7)
00:11:37.786    161.169 -   162.059:    0.3585%  (        8)
00:11:37.786    162.059 -   162.950:    0.3653%  (        7)
00:11:37.786    162.950 -   163.840:    0.3682%  (        3)
00:11:37.786    163.840 -   164.730:    0.3769%  (        9)
00:11:37.786    164.730 -   165.621:    0.3847%  (        8)
00:11:37.786    165.621 -   166.511:    0.3915%  (        7)
00:11:37.786    166.511 -   167.402:    0.3954%  (        4)
00:11:37.786    167.402 -   168.292:    0.3992%  (        4)
00:11:37.786    168.292 -   169.183:    0.4012%  (        2)
00:11:37.786    169.183 -   170.073:    0.4060%  (        5)
00:11:37.786    170.073 -   170.963:    0.4128%  (        7)
00:11:37.786    170.963 -   171.854:    0.4186%  (        6)
00:11:37.786    171.854 -   172.744:    0.4312%  (       13)
00:11:37.786    172.744 -   173.635:    0.4390%  (        8)
00:11:37.786    173.635 -   174.525:    0.4448%  (        6)
00:11:37.786    174.525 -   175.416:    0.4506%  (        6)
00:11:37.786    175.416 -   176.306:    0.4593%  (        9)
00:11:37.786    176.306 -   177.197:    0.4709%  (       12)
00:11:37.786    177.197 -   178.087:    0.4777%  (        7)
00:11:37.786    178.087 -   178.977:    0.4864%  (        9)
00:11:37.786    178.977 -   179.868:    0.5000%  (       14)
00:11:37.786    179.868 -   180.758:    0.5049%  (        5)
00:11:37.786    180.758 -   181.649:    0.5145%  (       10)
00:11:37.786    181.649 -   182.539:    0.5242%  (       10)
00:11:37.786    182.539 -   183.430:    0.5368%  (       13)
00:11:37.786    183.430 -   184.320:    0.5407%  (        4)
00:11:37.786    184.320 -   185.210:    0.5504%  (       10)
00:11:37.786    185.210 -   186.101:    0.5523%  (        2)
00:11:37.786    186.101 -   186.991:    0.5659%  (       14)
00:11:37.786    186.991 -   187.882:    0.5737%  (        8)
00:11:37.786    187.882 -   188.772:    0.5872%  (       14)
00:11:37.786    188.772 -   189.663:    0.5979%  (       11)
00:11:37.786    189.663 -   190.553:    0.6085%  (       11)
00:11:37.786    190.553 -   191.443:    0.6105%  (        2)
00:11:37.786    191.443 -   192.334:    0.6202%  (       10)
00:11:37.786    192.334 -   193.224:    0.6260%  (        6)
00:11:37.786    193.224 -   194.115:    0.6337%  (        8)
00:11:37.786    194.115 -   195.005:    0.6454%  (       12)
00:11:37.786    195.005 -   195.896:    0.6609%  (       16)
00:11:37.786    195.896 -   196.786:    0.6667%  (        6)
00:11:37.786    196.786 -   197.677:    0.6744%  (        8)
00:11:37.786    197.677 -   198.567:    0.6812%  (        7)
00:11:37.786    198.567 -   199.457:    0.6967%  (       16)
00:11:37.786    199.457 -   200.348:    0.7113%  (       15)
00:11:37.786    200.348 -   201.238:    0.7200%  (        9)
00:11:37.786    201.238 -   202.129:    0.7306%  (       11)
00:11:37.786    202.129 -   203.019:    0.7384%  (        8)
00:11:37.786    203.019 -   203.910:    0.7500%  (       12)
00:11:37.786    203.910 -   204.800:    0.7568%  (        7)
00:11:37.786    204.800 -   205.690:    0.7645%  (        8)
00:11:37.786    205.690 -   206.581:    0.7762%  (       12)
00:11:37.786    206.581 -   207.471:    0.7917%  (       16)
00:11:37.786    207.471 -   208.362:    0.8014%  (       10)
00:11:37.786    208.362 -   209.252:    0.8169%  (       16)
00:11:37.786    209.252 -   210.143:    0.8304%  (       14)
00:11:37.786    210.143 -   211.033:    0.8411%  (       11)
00:11:37.786    211.033 -   211.923:    0.8537%  (       13)
00:11:37.786    211.923 -   212.814:    0.8605%  (        7)
00:11:37.786    212.814 -   213.704:    0.8750%  (       15)
00:11:37.786    213.704 -   214.595:    0.8857%  (       11)
00:11:37.786    214.595 -   215.485:    0.8944%  (        9)
00:11:37.786    215.485 -   216.376:    0.9099%  (       16)
00:11:37.786    216.376 -   217.266:    0.9293%  (       20)
00:11:37.786    217.266 -   218.157:    0.9390%  (       10)
00:11:37.786    218.157 -   219.047:    0.9477%  (        9)
00:11:37.786    219.047 -   219.937:    0.9584%  (       11)
00:11:37.786    219.937 -   220.828:    0.9690%  (       11)
00:11:37.786    220.828 -   221.718:    0.9845%  (       16)
00:11:37.786    221.718 -   222.609:    0.9913%  (        7)
00:11:37.786    222.609 -   223.499:    1.0020%  (       11)
00:11:37.786    223.499 -   224.390:    1.0175%  (       16)
00:11:37.786    224.390 -   225.280:    1.0262%  (        9)
00:11:37.786    225.280 -   226.170:    1.0378%  (       12)
00:11:37.786    226.170 -   227.061:    1.0533%  (       16)
00:11:37.786    227.061 -   227.951:    1.0640%  (       11)
00:11:37.786    227.951 -   229.732:    1.0882%  (       25)
00:11:37.786    229.732 -   231.513:    1.1192%  (       32)
00:11:37.786    231.513 -   233.294:    1.1483%  (       30)
00:11:37.786    233.294 -   235.075:    1.1725%  (       25)
00:11:37.786    235.075 -   236.856:    1.2016%  (       30)
00:11:37.786    236.856 -   238.637:    1.2248%  (       24)
00:11:37.786    238.637 -   240.417:    1.2452%  (       21)
00:11:37.786    240.417 -   242.198:    1.2723%  (       28)
00:11:37.786    242.198 -   243.979:    1.3033%  (       32)
00:11:37.786    243.979 -   245.760:    1.3295%  (       27)
00:11:37.786    245.760 -   247.541:    1.3489%  (       20)
00:11:37.786    247.541 -   249.322:    1.3731%  (       25)
00:11:37.786    249.322 -   251.103:    1.4031%  (       31)
00:11:37.786    251.103 -   252.883:    1.4274%  (       25)
00:11:37.786    252.883 -   254.664:    1.4555%  (       29)
00:11:37.786    254.664 -   256.445:    1.4865%  (       32)
00:11:37.786    256.445 -   258.226:    1.5146%  (       29)
00:11:37.786    258.226 -   260.007:    1.5427%  (       29)
00:11:37.786    260.007 -   261.788:    1.5669%  (       25)
00:11:37.786    261.788 -   263.569:    1.5872%  (       21)
00:11:37.786    263.569 -   265.350:    1.6182%  (       32)
00:11:37.786    265.350 -   267.130:    1.6512%  (       34)
00:11:37.786    267.130 -   268.911:    1.6938%  (       44)
00:11:37.786    268.911 -   270.692:    1.7171%  (       24)
00:11:37.786    270.692 -   272.473:    1.7413%  (       25)
00:11:37.786    272.473 -   274.254:    1.7665%  (       26)
00:11:37.786    274.254 -   276.035:    1.7956%  (       30)
00:11:37.786    276.035 -   277.816:    1.8324%  (       38)
00:11:37.786    277.816 -   279.597:    1.8731%  (       42)
00:11:37.786    279.597 -   281.377:    1.9090%  (       37)
00:11:37.786    281.377 -   283.158:    1.9361%  (       28)
00:11:37.786    283.158 -   284.939:    1.9555%  (       20)
00:11:37.786    284.939 -   286.720:    1.9816%  (       27)
00:11:37.786    286.720 -   288.501:    2.0165%  (       36)
00:11:37.786    288.501 -   290.282:    2.0514%  (       36)
00:11:37.786    290.282 -   292.063:    2.0931%  (       43)
00:11:37.786    292.063 -   293.843:    2.1309%  (       39)
00:11:37.786    293.843 -   295.624:    2.1716%  (       42)
00:11:37.786    295.624 -   297.405:    2.2045%  (       34)
00:11:37.786    297.405 -   299.186:    2.2345%  (       31)
00:11:37.786    299.186 -   300.967:    2.2820%  (       49)
00:11:37.786    300.967 -   302.748:    2.3218%  (       41)
00:11:37.786    302.748 -   304.529:    2.3547%  (       34)
00:11:37.786    304.529 -   306.310:    2.3876%  (       34)
00:11:37.786    306.310 -   308.090:    2.4274%  (       41)
00:11:37.786    308.090 -   309.871:    2.4642%  (       38)
00:11:37.786    309.871 -   311.652:    2.4942%  (       31)
00:11:37.786    311.652 -   313.433:    2.5349%  (       42)
00:11:37.786    313.433 -   315.214:    2.5805%  (       47)
00:11:37.786    315.214 -   316.995:    2.6047%  (       25)
00:11:37.786    316.995 -   318.776:    2.6493%  (       46)
00:11:37.786    318.776 -   320.557:    2.7006%  (       53)
00:11:37.786    320.557 -   322.337:    2.7384%  (       39)
00:11:37.786    322.337 -   324.118:    2.7714%  (       34)
00:11:37.786    324.118 -   325.899:    2.8130%  (       43)
00:11:37.786    325.899 -   327.680:    2.8508%  (       39)
00:11:37.786    327.680 -   329.461:    2.8857%  (       36)
00:11:37.786    329.461 -   331.242:    2.9196%  (       35)
00:11:37.786    331.242 -   333.023:    2.9681%  (       50)
00:11:37.786    333.023 -   334.803:    3.0175%  (       51)
00:11:37.786    334.803 -   336.584:    3.0630%  (       47)
00:11:37.786    336.584 -   338.365:    3.1086%  (       47)
00:11:37.786    338.365 -   340.146:    3.1590%  (       52)
00:11:37.786    340.146 -   341.927:    3.2161%  (       59)
00:11:37.786    341.927 -   343.708:    3.2656%  (       51)
00:11:37.786    343.708 -   345.489:    3.3072%  (       43)
00:11:37.786    345.489 -   347.270:    3.3693%  (       64)
00:11:37.786    347.270 -   349.050:    3.4225%  (       55)
00:11:37.786    349.050 -   350.831:    3.4662%  (       45)
00:11:37.786    350.831 -   352.612:    3.5127%  (       48)
00:11:37.786    352.612 -   354.393:    3.5650%  (       54)
00:11:37.786    354.393 -   356.174:    3.6134%  (       50)
00:11:37.786    356.174 -   357.955:    3.6696%  (       58)
00:11:37.787    357.955 -   359.736:    3.7181%  (       50)
00:11:37.787    359.736 -   361.517:    3.7675%  (       51)
00:11:37.787    361.517 -   363.297:    3.8305%  (       65)
00:11:37.787    363.297 -   365.078:    3.8857%  (       57)
00:11:37.787    365.078 -   366.859:    3.9293%  (       45)
00:11:37.787    366.859 -   368.640:    3.9817%  (       54)
00:11:37.787    368.640 -   370.421:    4.0253%  (       45)
00:11:37.787    370.421 -   372.202:    4.0873%  (       64)
00:11:37.787    372.202 -   373.983:    4.1367%  (       51)
00:11:37.787    373.983 -   375.763:    4.1832%  (       48)
00:11:37.787    375.763 -   377.544:    4.2317%  (       50)
00:11:37.787    377.544 -   379.325:    4.2859%  (       56)
00:11:37.787    379.325 -   381.106:    4.3295%  (       45)
00:11:37.787    381.106 -   382.887:    4.3838%  (       56)
00:11:37.787    382.887 -   384.668:    4.4390%  (       57)
00:11:37.787    384.668 -   386.449:    4.4962%  (       59)
00:11:37.787    386.449 -   388.230:    4.5456%  (       51)
00:11:37.787    388.230 -   390.010:    4.5892%  (       45)
00:11:37.787    390.010 -   391.791:    4.6580%  (       71)
00:11:37.787    391.791 -   393.572:    4.7036%  (       47)
00:11:37.787    393.572 -   395.353:    4.7511%  (       49)
00:11:37.787    395.353 -   397.134:    4.8111%  (       62)
00:11:37.787    397.134 -   398.915:    4.8712%  (       62)
00:11:37.787    398.915 -   400.696:    4.9216%  (       52)
00:11:37.787    400.696 -   402.477:    4.9759%  (       56)
00:11:37.787    402.477 -   404.257:    5.0360%  (       62)
00:11:37.787    404.257 -   406.038:    5.0922%  (       58)
00:11:37.787    406.038 -   407.819:    5.1454%  (       55)
00:11:37.787    407.819 -   409.600:    5.1910%  (       47)
00:11:37.787    409.600 -   411.381:    5.2404%  (       51)
00:11:37.787    411.381 -   413.162:    5.3044%  (       66)
00:11:37.787    413.162 -   414.943:    5.3557%  (       53)
00:11:37.787    414.943 -   416.723:    5.4051%  (       51)
00:11:37.787    416.723 -   418.504:    5.4623%  (       59)
00:11:37.787    418.504 -   420.285:    5.5175%  (       57)
00:11:37.787    420.285 -   422.066:    5.5825%  (       67)
00:11:37.787    422.066 -   423.847:    5.6241%  (       43)
00:11:37.787    423.847 -   425.628:    5.6949%  (       73)
00:11:37.787    425.628 -   427.409:    5.7520%  (       59)
00:11:37.787    427.409 -   429.190:    5.8208%  (       71)
00:11:37.787    429.190 -   430.970:    5.8809%  (       62)
00:11:37.787    430.970 -   432.751:    5.9429%  (       64)
00:11:37.787    432.751 -   434.532:    6.0098%  (       69)
00:11:37.787    434.532 -   436.313:    6.0912%  (       84)
00:11:37.787    436.313 -   438.094:    6.1387%  (       49)
00:11:37.787    438.094 -   439.875:    6.2181%  (       82)
00:11:37.787    439.875 -   441.656:    6.2821%  (       66)
00:11:37.787    441.656 -   443.437:    6.3461%  (       66)
00:11:37.787    443.437 -   445.217:    6.4197%  (       76)
00:11:37.787    445.217 -   446.998:    6.4769%  (       59)
00:11:37.787    446.998 -   448.779:    6.5282%  (       53)
00:11:37.787    448.779 -   450.560:    6.5970%  (       71)
00:11:37.787    450.560 -   452.341:    6.6678%  (       73)
00:11:37.787    452.341 -   454.122:    6.7288%  (       63)
00:11:37.787    454.122 -   455.903:    6.7947%  (       68)
00:11:37.787    455.903 -   459.464:    6.9527%  (      163)
00:11:37.787    459.464 -   463.026:    7.0912%  (      143)
00:11:37.787    463.026 -   466.588:    7.2182%  (      131)
00:11:37.787    466.588 -   470.150:    7.3461%  (      132)
00:11:37.787    470.150 -   473.711:    7.4682%  (      126)
00:11:37.787    473.711 -   477.273:    7.6019%  (      138)
00:11:37.787    477.273 -   480.835:    7.7414%  (      144)
00:11:37.787    480.835 -   484.397:    7.8674%  (      130)
00:11:37.787    484.397 -   487.958:    7.9750%  (      111)
00:11:37.787    487.958 -   491.520:    8.1252%  (      155)
00:11:37.787    491.520 -   495.082:    8.2705%  (      150)
00:11:37.787    495.082 -   498.643:    8.4139%  (      148)
00:11:37.787    498.643 -   502.205:    8.5728%  (      164)
00:11:37.787    502.205 -   505.767:    8.7289%  (      161)
00:11:37.787    505.767 -   509.329:    8.8955%  (      172)
00:11:37.787    509.329 -   512.890:    9.0409%  (      150)
00:11:37.787    512.890 -   516.452:    9.1911%  (      155)
00:11:37.787    516.452 -   520.014:    9.3490%  (      163)
00:11:37.787    520.014 -   523.576:    9.4982%  (      154)
00:11:37.787    523.576 -   527.137:    9.6513%  (      158)
00:11:37.787    527.137 -   530.699:    9.7996%  (      153)
00:11:37.787    530.699 -   534.261:    9.9333%  (      138)
00:11:37.787    534.261 -   537.823:   10.0816%  (      153)
00:11:37.787    537.823 -   541.384:   10.2095%  (      132)
00:11:37.787    541.384 -   544.946:   10.3878%  (      184)
00:11:37.787    544.946 -   548.508:   10.5157%  (      132)
00:11:37.787    548.508 -   552.070:   10.7008%  (      191)
00:11:37.787    552.070 -   555.631:   10.8549%  (      159)
00:11:37.787    555.631 -   559.193:   11.0206%  (      171)
00:11:37.787    559.193 -   562.755:   11.1649%  (      149)
00:11:37.787    562.755 -   566.317:   11.3306%  (      171)
00:11:37.787    566.317 -   569.878:   11.4963%  (      171)
00:11:37.787    569.878 -   573.440:   11.6863%  (      196)
00:11:37.787    573.440 -   577.002:   11.8471%  (      166)
00:11:37.787    577.002 -   580.563:   12.0167%  (      175)
00:11:37.787    580.563 -   584.125:   12.1911%  (      180)
00:11:37.787    584.125 -   587.687:   12.3530%  (      167)
00:11:37.787    587.687 -   591.249:   12.5041%  (      156)
00:11:37.787    591.249 -   594.810:   12.6814%  (      183)
00:11:37.787    594.810 -   598.372:   12.8530%  (      177)
00:11:37.787    598.372 -   601.934:   13.0128%  (      165)
00:11:37.787    601.934 -   605.496:   13.1863%  (      179)
00:11:37.787    605.496 -   609.057:   13.3646%  (      184)
00:11:37.787    609.057 -   612.619:   13.5381%  (      179)
00:11:37.787    612.619 -   616.181:   13.7193%  (      187)
00:11:37.787    616.181 -   619.743:   13.9034%  (      190)
00:11:37.787    619.743 -   623.304:   14.0875%  (      190)
00:11:37.787    623.304 -   626.866:   14.2522%  (      170)
00:11:37.787    626.866 -   630.428:   14.4557%  (      210)
00:11:37.787    630.428 -   633.990:   14.6514%  (      202)
00:11:37.787    633.990 -   637.551:   14.8414%  (      196)
00:11:37.787    637.551 -   641.113:   15.0274%  (      192)
00:11:37.787    641.113 -   644.675:   15.2144%  (      193)
00:11:37.787    644.675 -   648.237:   15.4112%  (      203)
00:11:37.787    648.237 -   651.798:   15.6185%  (      214)
00:11:37.787    651.798 -   655.360:   15.8094%  (      197)
00:11:37.787    655.360 -   658.922:   16.0187%  (      216)
00:11:37.787    658.922 -   662.483:   16.2125%  (      200)
00:11:37.787    662.483 -   666.045:   16.3986%  (      192)
00:11:37.787    666.045 -   669.607:   16.5817%  (      189)
00:11:37.787    669.607 -   673.169:   16.7978%  (      223)
00:11:37.787    673.169 -   676.730:   16.9945%  (      203)
00:11:37.787    676.730 -   680.292:   17.1951%  (      207)
00:11:37.787    680.292 -   683.854:   17.3976%  (      209)
00:11:37.787    683.854 -   687.416:   17.6186%  (      228)
00:11:37.787    687.416 -   690.977:   17.8744%  (      264)
00:11:37.787    690.977 -   694.539:   18.0730%  (      205)
00:11:37.787    694.539 -   698.101:   18.2872%  (      221)
00:11:37.787    698.101 -   701.663:   18.5120%  (      232)
00:11:37.787    701.663 -   705.224:   18.7242%  (      219)
00:11:37.787    705.224 -   708.786:   18.9296%  (      212)
00:11:37.787    708.786 -   712.348:   19.1389%  (      216)
00:11:37.787    712.348 -   715.910:   19.3512%  (      219)
00:11:37.787    715.910 -   719.471:   19.5469%  (      202)
00:11:37.787    719.471 -   723.033:   19.7678%  (      228)
00:11:37.787    723.033 -   726.595:   19.9820%  (      221)
00:11:37.787    726.595 -   730.157:   20.1777%  (      202)
00:11:37.787    730.157 -   733.718:   20.3938%  (      223)
00:11:37.787    733.718 -   737.280:   20.6147%  (      228)
00:11:37.787    737.280 -   740.842:   20.8531%  (      246)
00:11:37.787    740.842 -   744.403:   21.0818%  (      236)
00:11:37.787    744.403 -   747.965:   21.2872%  (      212)
00:11:37.787    747.965 -   751.527:   21.5101%  (      230)
00:11:37.787    751.527 -   755.089:   21.7233%  (      220)
00:11:37.787    755.089 -   758.650:   21.9152%  (      198)
00:11:37.787    758.650 -   762.212:   22.1564%  (      249)
00:11:37.787    762.212 -   765.774:   22.3890%  (      240)
00:11:37.787    765.774 -   769.336:   22.6206%  (      239)
00:11:37.787    769.336 -   772.897:   22.8367%  (      223)
00:11:37.787    772.897 -   776.459:   23.0625%  (      233)
00:11:37.787    776.459 -   780.021:   23.2660%  (      210)
00:11:37.787    780.021 -   783.583:   23.5014%  (      243)
00:11:37.787    783.583 -   787.144:   23.7262%  (      232)
00:11:37.787    787.144 -   790.706:   23.9278%  (      208)
00:11:37.787    790.706 -   794.268:   24.1458%  (      225)
00:11:37.787    794.268 -   797.830:   24.3726%  (      234)
00:11:37.787    797.830 -   801.391:   24.5809%  (      215)
00:11:37.787    801.391 -   804.953:   24.8173%  (      244)
00:11:37.787    804.953 -   808.515:   25.0354%  (      225)
00:11:37.787    808.515 -   812.077:   25.2834%  (      256)
00:11:37.787    812.077 -   815.638:   25.5305%  (      255)
00:11:37.787    815.638 -   819.200:   25.7679%  (      245)
00:11:37.787    819.200 -   822.762:   25.9869%  (      226)
00:11:37.787    822.762 -   826.323:   26.2117%  (      232)
00:11:37.787    826.323 -   829.885:   26.4036%  (      198)
00:11:37.787    829.885 -   833.447:   26.5945%  (      197)
00:11:37.787    833.447 -   837.009:   26.8271%  (      240)
00:11:37.787    837.009 -   840.570:   27.0587%  (      239)
00:11:37.787    840.570 -   844.132:   27.2767%  (      225)
00:11:37.787    844.132 -   847.694:   27.4850%  (      215)
00:11:37.787    847.694 -   851.256:   27.6662%  (      187)
00:11:37.787    851.256 -   854.817:   27.8794%  (      220)
00:11:37.787    854.817 -   858.379:   28.1071%  (      235)
00:11:37.787    858.379 -   861.941:   28.3281%  (      228)
00:11:37.787    861.941 -   865.503:   28.5577%  (      237)
00:11:37.787    865.503 -   869.064:   28.7670%  (      216)
00:11:37.787    869.064 -   872.626:   28.9802%  (      220)
00:11:37.787    872.626 -   876.188:   29.1924%  (      219)
00:11:37.787    876.188 -   879.750:   29.4008%  (      215)
00:11:37.787    879.750 -   883.311:   29.6149%  (      221)
00:11:37.787    883.311 -   886.873:   29.8223%  (      214)
00:11:37.787    886.873 -   890.435:   30.0694%  (      255)
00:11:37.787    890.435 -   893.997:   30.2855%  (      223)
00:11:37.787    893.997 -   897.558:   30.5268%  (      249)
00:11:37.787    897.558 -   901.120:   30.7273%  (      207)
00:11:37.787    901.120 -   904.682:   30.9454%  (      225)
00:11:37.787    904.682 -   908.243:   31.1595%  (      221)
00:11:37.787    908.243 -   911.805:   31.3775%  (      225)
00:11:37.787    911.805 -   918.929:   31.7971%  (      433)
00:11:37.787    918.929 -   926.052:   32.2051%  (      421)
00:11:37.787    926.052 -   933.176:   32.6876%  (      498)
00:11:37.787    933.176 -   940.299:   33.1460%  (      473)
00:11:37.787    940.299 -   947.423:   33.6043%  (      473)
00:11:37.787    947.423 -   954.546:   34.0646%  (      475)
00:11:37.787    954.546 -   961.670:   34.4764%  (      425)
00:11:37.787    961.670 -   968.793:   34.9232%  (      461)
00:11:37.787    968.793 -   975.917:   35.4028%  (      495)
00:11:37.787    975.917 -   983.040:   35.8515%  (      463)
00:11:37.787    983.040 -   990.163:   36.2594%  (      421)
00:11:37.788    990.163 -   997.287:   36.6887%  (      443)
00:11:37.788    997.287 -  1004.410:   37.0879%  (      412)
00:11:37.788   1004.410 -  1011.534:   37.4949%  (      420)
00:11:37.788   1011.534 -  1018.657:   37.9310%  (      450)
00:11:37.788   1018.657 -  1025.781:   38.3583%  (      441)
00:11:37.788   1025.781 -  1032.904:   38.7827%  (      438)
00:11:37.788   1032.904 -  1040.028:   39.1791%  (      409)
00:11:37.788   1040.028 -  1047.151:   39.5899%  (      424)
00:11:37.788   1047.151 -  1054.275:   40.0076%  (      431)
00:11:37.788   1054.275 -  1061.398:   40.4320%  (      438)
00:11:37.788   1061.398 -  1068.522:   40.8622%  (      444)
00:11:37.788   1068.522 -  1075.645:   41.2634%  (      414)
00:11:37.788   1075.645 -  1082.769:   41.6704%  (      420)
00:11:37.788   1082.769 -  1089.892:   42.0367%  (      378)
00:11:37.788   1089.892 -  1097.016:   42.4698%  (      447)
00:11:37.788   1097.016 -  1104.139:   42.8419%  (      384)
00:11:37.788   1104.139 -  1111.263:   43.2034%  (      373)
00:11:37.788   1111.263 -  1118.386:   43.6113%  (      421)
00:11:37.788   1118.386 -  1125.510:   43.9708%  (      371)
00:11:37.788   1125.510 -  1132.633:   44.3894%  (      432)
00:11:37.788   1132.633 -  1139.757:   44.7906%  (      414)
00:11:37.788   1139.757 -  1146.880:   45.2276%  (      451)
00:11:37.788   1146.880 -  1154.003:   45.6133%  (      398)
00:11:37.788   1154.003 -  1161.127:   46.0125%  (      412)
00:11:37.788   1161.127 -  1168.250:   46.3972%  (      397)
00:11:37.788   1168.250 -  1175.374:   46.8207%  (      437)
00:11:37.788   1175.374 -  1182.497:   47.2780%  (      472)
00:11:37.788   1182.497 -  1189.621:   47.6879%  (      423)
00:11:37.788   1189.621 -  1196.744:   48.0998%  (      425)
00:11:37.788   1196.744 -  1203.868:   48.5465%  (      461)
00:11:37.788   1203.868 -  1210.991:   48.9854%  (      453)
00:11:37.788   1210.991 -  1218.115:   49.3905%  (      418)
00:11:37.788   1218.115 -  1225.238:   49.8101%  (      433)
00:11:37.788   1225.238 -  1232.362:   50.2258%  (      429)
00:11:37.788   1232.362 -  1239.485:   50.6512%  (      439)
00:11:37.788   1239.485 -  1246.609:   51.0727%  (      435)
00:11:37.788   1246.609 -  1253.732:   51.5049%  (      446)
00:11:37.788   1253.732 -  1260.856:   51.9303%  (      439)
00:11:37.788   1260.856 -  1267.979:   52.3595%  (      443)
00:11:37.788   1267.979 -  1275.103:   52.7568%  (      410)
00:11:37.788   1275.103 -  1282.226:   53.1880%  (      445)
00:11:37.788   1282.226 -  1289.350:   53.5979%  (      423)
00:11:37.788   1289.350 -  1296.473:   54.0175%  (      433)
00:11:37.788   1296.473 -  1303.597:   54.4071%  (      402)
00:11:37.788   1303.597 -  1310.720:   54.7937%  (      399)
00:11:37.788   1310.720 -  1317.843:   55.2065%  (      426)
00:11:37.788   1317.843 -  1324.967:   55.6203%  (      427)
00:11:37.788   1324.967 -  1332.090:   56.0689%  (      463)
00:11:37.788   1332.090 -  1339.214:   56.4807%  (      425)
00:11:37.788   1339.214 -  1346.337:   56.9275%  (      461)
00:11:37.788   1346.337 -  1353.461:   57.3441%  (      430)
00:11:37.788   1353.461 -  1360.584:   57.7618%  (      431)
00:11:37.788   1360.584 -  1367.708:   58.1794%  (      431)
00:11:37.788   1367.708 -  1374.831:   58.5874%  (      421)
00:11:37.788   1374.831 -  1381.955:   58.9905%  (      416)
00:11:37.788   1381.955 -  1389.078:   59.4246%  (      448)
00:11:37.788   1389.078 -  1396.202:   59.8548%  (      444)
00:11:37.788   1396.202 -  1403.325:   60.2628%  (      421)
00:11:37.788   1403.325 -  1410.449:   60.6678%  (      418)
00:11:37.788   1410.449 -  1417.572:   61.1272%  (      474)
00:11:37.788   1417.572 -  1424.696:   61.5496%  (      436)
00:11:37.788   1424.696 -  1431.819:   61.9993%  (      464)
00:11:37.788   1431.819 -  1438.943:   62.4469%  (      462)
00:11:37.788   1438.943 -  1446.066:   62.8094%  (      374)
00:11:37.788   1446.066 -  1453.190:   63.2328%  (      437)
00:11:37.788   1453.190 -  1460.313:   63.6476%  (      428)
00:11:37.788   1460.313 -  1467.437:   64.0604%  (      426)
00:11:37.788   1467.437 -  1474.560:   64.4925%  (      446)
00:11:37.788   1474.560 -  1481.683:   64.9392%  (      461)
00:11:37.788   1481.683 -  1488.807:   65.3734%  (      448)
00:11:37.788   1488.807 -  1495.930:   65.8114%  (      452)
00:11:37.788   1495.930 -  1503.054:   66.2503%  (      453)
00:11:37.788   1503.054 -  1510.177:   66.6864%  (      450)
00:11:37.788   1510.177 -  1517.301:   67.1447%  (      473)
00:11:37.788   1517.301 -  1524.424:   67.5972%  (      467)
00:11:37.788   1524.424 -  1531.548:   68.0062%  (      422)
00:11:37.788   1531.548 -  1538.671:   68.4432%  (      451)
00:11:37.788   1538.671 -  1545.795:   68.9103%  (      482)
00:11:37.788   1545.795 -  1552.918:   69.3376%  (      441)
00:11:37.788   1552.918 -  1560.042:   69.7543%  (      430)
00:11:37.788   1560.042 -  1567.165:   70.1642%  (      423)
00:11:37.788   1567.165 -  1574.289:   70.5925%  (      442)
00:11:37.788   1574.289 -  1581.412:   70.9965%  (      417)
00:11:37.788   1581.412 -  1588.536:   71.4432%  (      461)
00:11:37.788   1588.536 -  1595.659:   71.8570%  (      427)
00:11:37.788   1595.659 -  1602.783:   72.2698%  (      426)
00:11:37.788   1602.783 -  1609.906:   72.6681%  (      411)
00:11:37.788   1609.906 -  1617.030:   73.1284%  (      475)
00:11:37.788   1617.030 -  1624.153:   73.5886%  (      475)
00:11:37.788   1624.153 -  1631.277:   74.0121%  (      437)
00:11:37.788   1631.277 -  1638.400:   74.4210%  (      422)
00:11:37.788   1638.400 -  1645.523:   74.8493%  (      442)
00:11:37.788   1645.523 -  1652.647:   75.2854%  (      450)
00:11:37.788   1652.647 -  1659.770:   75.7156%  (      444)
00:11:37.788   1659.770 -  1666.894:   76.1245%  (      422)
00:11:37.788   1666.894 -  1674.017:   76.5315%  (      420)
00:11:37.788   1674.017 -  1681.141:   76.9308%  (      412)
00:11:37.788   1681.141 -  1688.264:   77.3368%  (      419)
00:11:37.788   1688.264 -  1695.388:   77.7573%  (      434)
00:11:37.788   1695.388 -  1702.511:   78.1701%  (      426)
00:11:37.788   1702.511 -  1709.635:   78.5500%  (      392)
00:11:37.788   1709.635 -  1716.758:   78.9250%  (      387)
00:11:37.788   1716.758 -  1723.882:   79.3339%  (      422)
00:11:37.788   1723.882 -  1731.005:   79.7448%  (      424)
00:11:37.788   1731.005 -  1738.129:   80.1033%  (      370)
00:11:37.788   1738.129 -  1745.252:   80.4618%  (      370)
00:11:37.788   1745.252 -  1752.376:   80.8175%  (      367)
00:11:37.788   1752.376 -  1759.499:   81.1867%  (      381)
00:11:37.788   1759.499 -  1766.623:   81.5946%  (      421)
00:11:37.788   1766.623 -  1773.746:   81.9919%  (      410)
00:11:37.788   1773.746 -  1780.870:   82.3533%  (      373)
00:11:37.788   1780.870 -  1787.993:   82.7254%  (      384)
00:11:37.788   1787.993 -  1795.117:   83.0636%  (      349)
00:11:37.788   1795.117 -  1802.240:   83.4251%  (      373)
00:11:37.788   1802.240 -  1809.363:   83.7826%  (      369)
00:11:37.788   1809.363 -  1816.487:   84.1576%  (      387)
00:11:37.788   1816.487 -  1823.610:   84.4919%  (      345)
00:11:37.788   1823.610 -  1837.857:   85.1925%  (      723)
00:11:37.788   1837.857 -  1852.104:   85.8563%  (      685)
00:11:37.788   1852.104 -  1866.351:   86.5085%  (      673)
00:11:37.788   1866.351 -  1880.598:   87.1257%  (      637)
00:11:37.788   1880.598 -  1894.845:   87.7469%  (      641)
00:11:37.788   1894.845 -  1909.092:   88.3205%  (      592)
00:11:37.788   1909.092 -  1923.339:   88.9281%  (      627)
00:11:37.788   1923.339 -  1937.586:   89.4446%  (      533)
00:11:37.788   1937.586 -  1951.833:   89.9698%  (      542)
00:11:37.788   1951.833 -  1966.080:   90.5018%  (      549)
00:11:37.788   1966.080 -  1980.327:   91.0366%  (      552)
00:11:37.788   1980.327 -  1994.574:   91.5270%  (      506)
00:11:37.788   1994.574 -  2008.821:   91.9853%  (      473)
00:11:37.788   2008.821 -  2023.068:   92.4311%  (      460)
00:11:37.788   2023.068 -  2037.315:   92.8322%  (      414)
00:11:37.788   2037.315 -  2051.562:   93.2537%  (      435)
00:11:37.788   2051.562 -  2065.809:   93.6520%  (      411)
00:11:37.788   2065.809 -  2080.056:   94.0406%  (      401)
00:11:37.788   2080.056 -  2094.303:   94.3904%  (      361)
00:11:37.788   2094.303 -  2108.550:   94.7315%  (      352)
00:11:37.788   2108.550 -  2122.797:   95.0464%  (      325)
00:11:37.788   2122.797 -  2137.043:   95.3613%  (      325)
00:11:37.788   2137.043 -  2151.290:   95.6588%  (      307)
00:11:37.788   2151.290 -  2165.537:   95.9389%  (      289)
00:11:37.788   2165.537 -  2179.784:   96.2054%  (      275)
00:11:37.788   2179.784 -  2194.031:   96.4554%  (      258)
00:11:37.788   2194.031 -  2208.278:   96.7170%  (      270)
00:11:37.788   2208.278 -  2222.525:   96.9505%  (      241)
00:11:37.788   2222.525 -  2236.772:   97.1540%  (      210)
00:11:37.788   2236.772 -  2251.019:   97.3624%  (      215)
00:11:37.788   2251.019 -  2265.266:   97.5571%  (      201)
00:11:37.788   2265.266 -  2279.513:   97.7219%  (      170)
00:11:37.788   2279.513 -  2293.760:   97.8798%  (      163)
00:11:37.788   2293.760 -  2308.007:   98.0281%  (      153)
00:11:37.788   2308.007 -  2322.254:   98.1482%  (      124)
00:11:37.788   2322.254 -  2336.501:   98.2974%  (      154)
00:11:37.788   2336.501 -  2350.748:   98.4205%  (      127)
00:11:37.788   2350.748 -  2364.995:   98.5319%  (      115)
00:11:37.788   2364.995 -  2379.242:   98.6347%  (      106)
00:11:37.788   2379.242 -  2393.489:   98.7432%  (      112)
00:11:37.788   2393.489 -  2407.736:   98.8362%  (       96)
00:11:37.788   2407.736 -  2421.983:   98.9137%  (       80)
00:11:37.788   2421.983 -  2436.230:   98.9864%  (       75)
00:11:37.788   2436.230 -  2450.477:   99.0659%  (       82)
00:11:37.788   2450.477 -  2464.723:   99.1289%  (       65)
00:11:37.788   2464.723 -  2478.970:   99.1957%  (       69)
00:11:37.788   2478.970 -  2493.217:   99.2471%  (       53)
00:11:37.788   2493.217 -  2507.464:   99.3178%  (       73)
00:11:37.788   2507.464 -  2521.711:   99.3721%  (       56)
00:11:37.788   2521.711 -  2535.958:   99.4186%  (       48)
00:11:37.788   2535.958 -  2550.205:   99.4680%  (       51)
00:11:37.788   2550.205 -  2564.452:   99.5077%  (       41)
00:11:37.788   2564.452 -  2578.699:   99.5610%  (       55)
00:11:37.788   2578.699 -  2592.946:   99.5998%  (       40)
00:11:37.788   2592.946 -  2607.193:   99.6386%  (       40)
00:11:37.788   2607.193 -  2621.440:   99.6676%  (       30)
00:11:37.788   2621.440 -  2635.687:   99.7074%  (       41)
00:11:37.788   2635.687 -  2649.934:   99.7326%  (       26)
00:11:37.788   2649.934 -  2664.181:   99.7577%  (       26)
00:11:37.788   2664.181 -  2678.428:   99.7800%  (       23)
00:11:37.788   2678.428 -  2692.675:   99.8072%  (       28)
00:11:37.788   2692.675 -  2706.922:   99.8236%  (       17)
00:11:37.788   2706.922 -  2721.169:   99.8450%  (       22)
00:11:37.788   2721.169 -  2735.416:   99.8576%  (       13)
00:11:37.788   2735.416 -  2749.663:   99.8702%  (       13)
00:11:37.788   2749.663 -  2763.910:   99.8857%  (       16)
00:11:37.788   2763.910 -  2778.157:   99.9012%  (       16)
00:11:37.788   2778.157 -  2792.403:   99.9089%  (        8)
00:11:37.788   2792.403 -  2806.650:   99.9205%  (       12)
00:11:37.788   2806.650 -  2820.897:   99.9283%  (        8)
00:11:37.788   2820.897 -  2835.144:   99.9390%  (       11)
00:11:37.788   2835.144 -  2849.391:   99.9477%  (        9)
00:11:37.788   2849.391 -  2863.638:   99.9525%  (        5)
00:11:37.788   2863.638 -  2877.885:   99.9554%  (        3)
00:11:37.788   2877.885 -  2892.132:   99.9622%  (        7)
00:11:37.788   2892.132 -  2906.379:   99.9671%  (        5)
00:11:37.788   2906.379 -  2920.626:   99.9729%  (        6)
00:11:37.789   2920.626 -  2934.873:   99.9787%  (        6)
00:11:37.789   2934.873 -  2949.120:   99.9797%  (        1)
00:11:37.789   2949.120 -  2963.367:   99.9826%  (        3)
00:11:37.789   2963.367 -  2977.614:   99.9864%  (        4)
00:11:37.789   2977.614 -  2991.861:   99.9884%  (        2)
00:11:37.789   2991.861 -  3006.108:   99.9922%  (        4)
00:11:37.789   3034.602 -  3048.849:   99.9942%  (        2)
00:11:37.789   3048.849 -  3063.096:   99.9952%  (        1)
00:11:37.789   3091.590 -  3105.837:   99.9961%  (        1)
00:11:37.789   3105.837 -  3120.083:   99.9971%  (        1)
00:11:37.789   3191.318 -  3205.565:   99.9981%  (        1)
00:11:37.789   3234.059 -  3248.306:   99.9990%  (        1)
00:11:37.789   3248.306 -  3262.553:  100.0000%  (        1)
00:11:37.789  
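The invocation below reruns the same spdk_nvme_perf binary against the controller, now with a sequential-write workload. As an annotated sketch of those flags (the meanings are my reading of the perf tool's usage text, not something this log states; verify against the binary's -h output):

# spdk_nvme_perf write phase, restated with assumed flag meanings:
#   -q 128    queue depth (outstanding I/Os)
#   -w write  I/O pattern: sequential writes
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run time in seconds
#   -LL       software latency tracking; doubling -L requests the detailed histogram
#   -i 0      shared-memory group ID, letting multiple SPDK processes coexist
/var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w write -o 12288 -t 1 -LL -i 0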
00:11:37.789   10:49:26	-- nvme/nvme.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:11:39.171  Initializing NVMe Controllers
00:11:39.171  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:11:39.171  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:11:39.171  Initialization complete. Launching workers.
00:11:39.171  ========================================================
00:11:39.171                                                                             Latency(us)
00:11:39.171  Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:39.171  PCIE (0000:5e:00.0) NSID 1 from core  0:  128465.59    1505.46     995.50      41.58    2206.57
00:11:39.171  ========================================================
00:11:39.171  Total                                  :  128465.59    1505.46     995.50      41.58    2206.57
00:11:39.171  
00:11:39.171  Summary latency data for PCIE (0000:5e:00.0) NSID 1 from core 0:
00:11:39.171  =================================================================================
00:11:39.171    1.00000% :   926.052us
00:11:39.171   10.00000% :   947.423us
00:11:39.171   25.00000% :   975.917us
00:11:39.171   50.00000% :   997.287us
00:11:39.171   75.00000% :  1018.657us
00:11:39.171   90.00000% :  1040.028us
00:11:39.171   95.00000% :  1054.275us
00:11:39.171   98.00000% :  1061.398us
00:11:39.171   99.00000% :  1068.522us
00:11:39.171   99.50000% :  1082.769us
00:11:39.171   99.90000% :  1866.351us
00:11:39.171   99.99000% :  2122.797us
00:11:39.171   99.99900% :  2208.278us
00:11:39.171   99.99990% :  2208.278us
00:11:39.171   99.99999% :  2208.278us
00:11:39.171  
00:11:39.171  Latency histogram for PCIE (0000:5e:00.0) NSID 1 from core 0:
00:11:39.171  ==============================================================================
00:11:39.171         Range in us     Cumulative    IO count
00:11:39.171     41.405 -    41.628:    0.0016%  (        2)
00:11:39.171     41.628 -    41.850:    0.0023%  (        1)
00:11:39.171     42.073 -    42.296:    0.0031%  (        1)
00:11:39.171     43.854 -    44.077:    0.0047%  (        2)
00:11:39.171    111.750 -   112.195:    0.0054%  (        1)
00:11:39.171    112.195 -   112.640:    0.0070%  (        2)
00:11:39.171    117.537 -   118.428:    0.0078%  (        1)
00:11:39.171    125.551 -   126.442:    0.0101%  (        3)
00:11:39.171    126.442 -   127.332:    0.0117%  (        2)
00:11:39.171    129.113 -   130.003:    0.0148%  (        4)
00:11:39.171    130.003 -   130.894:    0.0156%  (        1)
00:11:39.171    211.923 -   212.814:    0.0163%  (        1)
00:11:39.171    212.814 -   213.704:    0.0202%  (        5)
00:11:39.171    214.595 -   215.485:    0.0210%  (        1)
00:11:39.171    215.485 -   216.376:    0.0233%  (        3)
00:11:39.171    306.310 -   308.090:    0.0272%  (        5)
00:11:39.171    311.652 -   313.433:    0.0288%  (        2)
00:11:39.171    313.433 -   315.214:    0.0311%  (        3)
00:11:39.171    391.791 -   393.572:    0.0319%  (        1)
00:11:39.171    395.353 -   397.134:    0.0389%  (        9)
00:11:39.171    448.779 -   450.560:    0.0420%  (        4)
00:11:39.171    473.711 -   477.273:    0.0451%  (        4)
00:11:39.171    477.273 -   480.835:    0.0459%  (        1)
00:11:39.171    480.835 -   484.397:    0.0498%  (        5)
00:11:39.171    505.767 -   509.329:    0.0506%  (        1)
00:11:39.171    509.329 -   512.890:    0.0514%  (        1)
00:11:39.171    512.890 -   516.452:    0.0521%  (        1)
00:11:39.171    516.452 -   520.014:    0.0529%  (        1)
00:11:39.171    520.014 -   523.576:    0.0537%  (        1)
00:11:39.171    523.576 -   527.137:    0.0584%  (        6)
00:11:39.171    530.699 -   534.261:    0.0622%  (        5)
00:11:39.171    534.261 -   537.823:    0.0661%  (        5)
00:11:39.171    548.508 -   552.070:    0.0669%  (        1)
00:11:39.171    552.070 -   555.631:    0.0692%  (        3)
00:11:39.171    555.631 -   559.193:    0.0739%  (        6)
00:11:39.171    569.878 -   573.440:    0.0762%  (        3)
00:11:39.171    573.440 -   577.002:    0.0786%  (        3)
00:11:39.171    577.002 -   580.563:    0.0817%  (        4)
00:11:39.171    601.934 -   605.496:    0.0825%  (        1)
00:11:39.171    605.496 -   609.057:    0.0864%  (        5)
00:11:39.171    609.057 -   612.619:    0.0903%  (        5)
00:11:39.171    616.181 -   619.743:    0.0918%  (        2)
00:11:39.171    619.743 -   623.304:    0.0988%  (        9)
00:11:39.171    630.428 -   633.990:    0.1011%  (        3)
00:11:39.171    633.990 -   637.551:    0.1058%  (        6)
00:11:39.171    637.551 -   641.113:    0.1066%  (        1)
00:11:39.171    655.360 -   658.922:    0.1089%  (        3)
00:11:39.171    658.922 -   662.483:    0.1144%  (        7)
00:11:39.171    687.416 -   690.977:    0.1222%  (       10)
00:11:39.171    698.101 -   701.663:    0.1237%  (        2)
00:11:39.171    701.663 -   705.224:    0.1284%  (        6)
00:11:39.171    705.224 -   708.786:    0.1299%  (        2)
00:11:39.171    708.786 -   712.348:    0.1307%  (        1)
00:11:39.171    733.718 -   737.280:    0.1323%  (        2)
00:11:39.171    737.280 -   740.842:    0.1385%  (        8)
00:11:39.171    740.842 -   744.403:    0.1400%  (        2)
00:11:39.171    744.403 -   747.965:    0.1447%  (        6)
00:11:39.171    747.965 -   751.527:    0.1463%  (        2)
00:11:39.171    772.897 -   776.459:    0.1471%  (        1)
00:11:39.171    776.459 -   780.021:    0.1502%  (        4)
00:11:39.171    780.021 -   783.583:    0.1517%  (        2)
00:11:39.171    783.583 -   787.144:    0.1548%  (        4)
00:11:39.171    794.268 -   797.830:    0.1579%  (        4)
00:11:39.171    797.830 -   801.391:    0.1611%  (        4)
00:11:39.171    801.391 -   804.953:    0.1626%  (        2)
00:11:39.171    815.638 -   819.200:    0.1642%  (        2)
00:11:39.171    819.200 -   822.762:    0.1649%  (        1)
00:11:39.171    822.762 -   826.323:    0.1704%  (        7)
00:11:39.171    833.447 -   837.009:    0.1712%  (        1)
00:11:39.171    837.009 -   840.570:    0.1719%  (        1)
00:11:39.171    840.570 -   844.132:    0.1790%  (        9)
00:11:39.171    861.941 -   865.503:    0.1805%  (        2)
00:11:39.171    865.503 -   869.064:    0.1828%  (        3)
00:11:39.171    869.064 -   872.626:    0.1844%  (        2)
00:11:39.171    872.626 -   876.188:    0.1867%  (        3)
00:11:39.171    876.188 -   879.750:    0.1914%  (        6)
00:11:39.171    879.750 -   883.311:    0.1992%  (       10)
00:11:39.171    883.311 -   886.873:    0.2023%  (        4)
00:11:39.171    886.873 -   890.435:    0.2054%  (        4)
00:11:39.171    890.435 -   893.997:    0.2109%  (        7)
00:11:39.171    893.997 -   897.558:    0.2147%  (        5)
00:11:39.171    897.558 -   901.120:    0.2280%  (       17)
00:11:39.171    901.120 -   904.682:    0.2443%  (       21)
00:11:39.171    904.682 -   908.243:    0.2910%  (       60)
00:11:39.171    908.243 -   911.805:    0.4139%  (      158)
00:11:39.171    911.805 -   918.929:    0.8916%  (      614)
00:11:39.171    918.929 -   926.052:    2.0299%  (     1463)
00:11:39.171    926.052 -   933.176:    3.9548%  (     2474)
00:11:39.171    933.176 -   940.299:    6.7410%  (     3581)
00:11:39.171    940.299 -   947.423:   10.2142%  (     4464)
00:11:39.171    947.423 -   954.546:   14.1566%  (     5067)
00:11:39.171    954.546 -   961.670:   18.6614%  (     5790)
00:11:39.171    961.670 -   968.793:   23.9592%  (     6809)
00:11:39.171    968.793 -   975.917:   29.8638%  (     7589)
00:11:39.171    975.917 -   983.040:   35.4781%  (     7216)
00:11:39.171    983.040 -   990.163:   43.3520%  (    10120)
00:11:39.171    990.163 -   997.287:   50.4065%  (     9067)
00:11:39.171    997.287 -  1004.410:   60.6021%  (    13104)
00:11:39.171   1004.410 -  1011.534:   68.3576%  (     9968)
00:11:39.171   1011.534 -  1018.657:   75.2605%  (     8872)
00:11:39.171   1018.657 -  1025.781:   81.0250%  (     7409)
00:11:39.171   1025.781 -  1032.904:   86.0784%  (     6495)
00:11:39.171   1032.904 -  1040.028:   90.2176%  (     5320)
00:11:39.171   1040.028 -  1047.151:   93.7367%  (     4523)
00:11:39.171   1047.151 -  1054.275:   96.3673%  (     3381)
00:11:39.171   1054.275 -  1061.398:   98.0883%  (     2212)
00:11:39.171   1061.398 -  1068.522:   99.0127%  (     1188)
00:11:39.171   1068.522 -  1075.645:   99.4958%  (      621)
00:11:39.171   1075.645 -  1082.769:   99.5744%  (      101)
00:11:39.171   1082.769 -  1089.892:   99.6040%  (       38)
00:11:39.171   1089.892 -  1097.016:   99.6110%  (        9)
00:11:39.171   1097.016 -  1104.139:   99.6211%  (       13)
00:11:39.171   1104.139 -  1111.263:   99.6312%  (       13)
00:11:39.171   1111.263 -  1118.386:   99.6390%  (       10)
00:11:39.171   1118.386 -  1125.510:   99.6460%  (        9)
00:11:39.171   1125.510 -  1132.633:   99.6545%  (       11)
00:11:39.171   1132.633 -  1139.757:   99.6639%  (       12)
00:11:39.171   1139.757 -  1146.880:   99.6701%  (        8)
00:11:39.171   1146.880 -  1154.003:   99.6833%  (       17)
00:11:39.171   1154.003 -  1161.127:   99.7051%  (       28)
00:11:39.171   1161.127 -  1168.250:   99.7199%  (       19)
00:11:39.171   1168.250 -  1175.374:   99.7300%  (       13)
00:11:39.171   1175.374 -  1182.497:   99.7378%  (       10)
00:11:39.172   1182.497 -  1189.621:   99.7456%  (       10)
00:11:39.172   1189.621 -  1196.744:   99.7534%  (       10)
00:11:39.172   1196.744 -  1203.868:   99.7596%  (        8)
00:11:39.172   1203.868 -  1210.991:   99.7666%  (        9)
00:11:39.172   1210.991 -  1218.115:   99.7744%  (       10)
00:11:39.172   1218.115 -  1225.238:   99.7751%  (        1)
00:11:39.172   1225.238 -  1232.362:   99.7767%  (        2)
00:11:39.172   1232.362 -  1239.485:   99.7860%  (       12)
00:11:39.172   1303.597 -  1310.720:   99.7884%  (        3)
00:11:39.172   1310.720 -  1317.843:   99.7930%  (        6)
00:11:39.172   1317.843 -  1324.967:   99.7946%  (        2)
00:11:39.172   1367.708 -  1374.831:   99.7993%  (        6)
00:11:39.172   1374.831 -  1381.955:   99.8024%  (        4)
00:11:39.172   1381.955 -  1389.078:   99.8109%  (       11)
00:11:39.172   1389.078 -  1396.202:   99.8125%  (        2)
00:11:39.172   1467.437 -  1474.560:   99.8210%  (       11)
00:11:39.172   1552.918 -  1560.042:   99.8226%  (        2)
00:11:39.172   1560.042 -  1567.165:   99.8304%  (       10)
00:11:39.172   1624.153 -  1631.277:   99.8366%  (        8)
00:11:39.172   1631.277 -  1638.400:   99.8389%  (        3)
00:11:39.172   1716.758 -  1723.882:   99.8428%  (        5)
00:11:39.172   1723.882 -  1731.005:   99.8475%  (        6)
00:11:39.172   1759.499 -  1766.623:   99.8483%  (        1)
00:11:39.172   1795.117 -  1802.240:   99.8545%  (        8)
00:11:39.172   1802.240 -  1809.363:   99.8623%  (       10)
00:11:39.172   1809.363 -  1816.487:   99.8662%  (        5)
00:11:39.172   1816.487 -  1823.610:   99.8701%  (        5)
00:11:39.172   1823.610 -  1837.857:   99.8755%  (        7)
00:11:39.172   1837.857 -  1852.104:   99.8895%  (       18)
00:11:39.172   1852.104 -  1866.351:   99.9027%  (       17)
00:11:39.172   1866.351 -  1880.598:   99.9245%  (       28)
00:11:39.172   1880.598 -  1894.845:   99.9448%  (       26)
00:11:39.172   1894.845 -  1909.092:   99.9510%  (        8)
00:11:39.172   1909.092 -  1923.339:   99.9595%  (       11)
00:11:39.172   1923.339 -  1937.586:   99.9650%  (        7)
00:11:39.172   1951.833 -  1966.080:   99.9697%  (        6)
00:11:39.172   1966.080 -  1980.327:   99.9735%  (        5)
00:11:39.172   2037.315 -  2051.562:   99.9829%  (       12)
00:11:39.172   2108.550 -  2122.797:   99.9907%  (       10)
00:11:39.172   2122.797 -  2137.043:   99.9914%  (        1)
00:11:39.172   2194.031 -  2208.278:  100.0000%  (       11)
00:11:39.172  
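When comparing the read and write runs above, the percentile rows in the two summary blocks are the quickest signal (e.g. p50 drops from 1232.362us to 997.287us for the write phase). A minimal sketch for pulling those rows out of a saved console log, assuming the exact "NN.NNNNN% :" format printed above; perf.log is a hypothetical capture file:

# print each percentile and its latency, e.g. "50.00000% 997.287us"
awk '/% :/ {print $2, $4}' perf.log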
00:11:39.172   10:49:27	-- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:39.172  
00:11:39.172  real	0m2.662s
00:11:39.172  user	0m2.180s
00:11:39.172  sys	0m0.355s
00:11:39.172   10:49:27	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:39.172   10:49:27	-- common/autotest_common.sh@10 -- # set +x
00:11:39.172  ************************************
00:11:39.172  END TEST nvme_perf
00:11:39.172  ************************************
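The run_test wrapper that frames each of the following tests prints the START/END banners and the real/user/sys timing seen throughout this log. A rough bash equivalent, inferred only from those markers (the actual helper lives in common/autotest_common.sh, referenced above, and does more, e.g. the xtrace handling):

run_test_sketch() {
    # first argument is the test name, the rest is the command to execute
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                      # produces the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
# e.g.: run_test_sketch nvme_hello_world ./build/examples/hello_world -i 0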
00:11:39.172   10:49:27	-- nvme/nvme.sh@87 -- # run_test nvme_hello_world /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0
00:11:39.172   10:49:27	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:11:39.172   10:49:27	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:39.172   10:49:27	-- common/autotest_common.sh@10 -- # set +x
00:11:39.172  ************************************
00:11:39.172  START TEST nvme_hello_world
00:11:39.172  ************************************
00:11:39.172   10:49:27	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hello_world -i 0
00:11:39.432  Initializing NVMe Controllers
00:11:39.432  Attached to 0000:5e:00.0
00:11:39.432    Namespace ID: 1 size: 4000GB
00:11:39.432  Initialization complete.
00:11:39.432  INFO: using host memory buffer for IO
00:11:39.432  Hello world!
00:11:39.432  
00:11:39.432  real	0m0.316s
00:11:39.432  user	0m0.089s
00:11:39.432  sys	0m0.183s
00:11:39.432   10:49:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:39.432   10:49:28	-- common/autotest_common.sh@10 -- # set +x
00:11:39.432  ************************************
00:11:39.432  END TEST nvme_hello_world
00:11:39.432  ************************************
00:11:39.432   10:49:28	-- nvme/nvme.sh@88 -- # run_test nvme_sgl /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl
00:11:39.432   10:49:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:39.432   10:49:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:39.432   10:49:28	-- common/autotest_common.sh@10 -- # set +x
00:11:39.432  ************************************
00:11:39.432  START TEST nvme_sgl
00:11:39.432  ************************************
00:11:39.432   10:49:28	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sgl/sgl
00:11:40.003  NVMe Readv/Writev Request test
00:11:40.003  Attached to 0000:5e:00.0
00:11:40.003  0000:5e:00.0: build_io_request_0 test passed
00:11:40.003  0000:5e:00.0: build_io_request_1 test passed
00:11:40.003  0000:5e:00.0: build_io_request_2 test passed
00:11:40.003  0000:5e:00.0: build_io_request_3 test passed
00:11:40.003  0000:5e:00.0: build_io_request_4 test passed
00:11:40.003  0000:5e:00.0: build_io_request_5 test passed
00:11:40.003  0000:5e:00.0: build_io_request_6 test passed
00:11:40.003  0000:5e:00.0: build_io_request_7 test passed
00:11:40.003  0000:5e:00.0: build_io_request_8 test passed
00:11:40.003  0000:5e:00.0: build_io_request_9 test passed
00:11:40.003  0000:5e:00.0: build_io_request_10 test passed
00:11:40.003  0000:5e:00.0: build_io_request_11 test passed
00:11:40.003  Cleaning up...
00:11:40.003  
00:11:40.003  real	0m0.421s
00:11:40.003  user	0m0.187s
00:11:40.003  sys	0m0.184s
00:11:40.003   10:49:28	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:40.003   10:49:28	-- common/autotest_common.sh@10 -- # set +x
00:11:40.003  ************************************
00:11:40.003  END TEST nvme_sgl
00:11:40.003  ************************************
00:11:40.003   10:49:28	-- nvme/nvme.sh@89 -- # run_test nvme_e2edp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp
00:11:40.003   10:49:28	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:40.003   10:49:28	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:40.003   10:49:28	-- common/autotest_common.sh@10 -- # set +x
00:11:40.003  ************************************
00:11:40.003  START TEST nvme_e2edp
00:11:40.003  ************************************
00:11:40.003   10:49:28	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/e2edp/nvme_dp
00:11:40.263  NVMe Write/Read with End-to-End data protection test
00:11:40.263  Attached to 0000:5e:00.0
00:11:40.263  Cleaning up...
00:11:40.263  
00:11:40.263  real	0m0.290s
00:11:40.263  user	0m0.082s
00:11:40.263  sys	0m0.154s
00:11:40.263   10:49:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:40.263   10:49:29	-- common/autotest_common.sh@10 -- # set +x
00:11:40.263  ************************************
00:11:40.263  END TEST nvme_e2edp
00:11:40.263  ************************************
00:11:40.263   10:49:29	-- nvme/nvme.sh@90 -- # run_test nvme_reserve /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve
00:11:40.263   10:49:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:40.263   10:49:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:40.263   10:49:29	-- common/autotest_common.sh@10 -- # set +x
00:11:40.263  ************************************
00:11:40.263  START TEST nvme_reserve
00:11:40.263  ************************************
00:11:40.263   10:49:29	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/reserve/reserve
00:11:40.523  =====================================================
00:11:40.523  NVMe Controller at PCI bus 94, device 0, function 0
00:11:40.523  =====================================================
00:11:40.523  Reservations:                Not Supported
00:11:40.523  Reservation test passed
00:11:40.523  
00:11:40.523  real	0m0.312s
00:11:40.523  user	0m0.087s
00:11:40.523  sys	0m0.169s
00:11:40.523   10:49:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:40.523   10:49:29	-- common/autotest_common.sh@10 -- # set +x
00:11:40.523  ************************************
00:11:40.523  END TEST nvme_reserve
00:11:40.523  ************************************
00:11:40.523   10:49:29	-- nvme/nvme.sh@91 -- # run_test nvme_err_injection /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection
00:11:40.523   10:49:29	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:40.523   10:49:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:40.523   10:49:29	-- common/autotest_common.sh@10 -- # set +x
00:11:40.523  ************************************
00:11:40.523  START TEST nvme_err_injection
00:11:40.523  ************************************
00:11:40.523   10:49:29	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/err_injection/err_injection
00:11:41.094  NVMe Error Injection test
00:11:41.094  Attached to 0000:5e:00.0
00:11:41.094  0000:5e:00.0: get features failed as expected
00:11:41.094  0000:5e:00.0: get features successfully as expected
00:11:41.094  0000:5e:00.0: read failed as expected
00:11:41.094  0000:5e:00.0: read successfully as expected
00:11:41.094  Cleaning up...
00:11:41.094  
00:11:41.094  real	0m0.324s
00:11:41.094  user	0m0.086s
00:11:41.094  sys	0m0.181s
00:11:41.094   10:49:29	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:41.094   10:49:29	-- common/autotest_common.sh@10 -- # set +x
00:11:41.094  ************************************
00:11:41.094  END TEST nvme_err_injection
00:11:41.094  ************************************
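
The paired "failed as expected / successfully as expected" lines above come from the tool first issuing Get Features and Read with software error injection armed, then repeating them after the injection is cleared. A sketch of the same invocation, assuming the setup from this run:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  sudo ./test/nvme/err_injection/err_injection   # injects errors on admin and read commands, then retries without injection
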
00:11:41.094   10:49:29	-- nvme/nvme.sh@92 -- # run_test nvme_overhead /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:41.094   10:49:29	-- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:11:41.094   10:49:29	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:41.094   10:49:29	-- common/autotest_common.sh@10 -- # set +x
00:11:41.094  ************************************
00:11:41.094  START TEST nvme_overhead
00:11:41.094  ************************************
00:11:41.094   10:49:29	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:42.478  Initializing NVMe Controllers
00:11:42.478  Attached to 0000:5e:00.0
00:11:42.478  Initialization complete. Launching workers.
00:11:42.478  submit (in ns)   avg, min, max =   4698.9,   4384.3,  35476.5
00:11:42.478  complete (in ns) avg, min, max =   2745.0,   2650.4, 1072985.2
00:11:42.478  
00:11:42.478  Submit histogram
00:11:42.478  ================
00:11:42.478         Range in us     Cumulative     Count
00:11:42.478      4.369 -     4.397:    0.0159%  (       14)
00:11:42.478      4.397 -     4.424:    0.7920%  (      685)
00:11:42.478      4.424 -     4.452:    3.3027%  (     2216)
00:11:42.478      4.452 -     4.480:    8.0295%  (     4172)
00:11:42.478      4.480 -     4.508:   13.8780%  (     5162)
00:11:42.478      4.508 -     4.536:   22.1647%  (     7314)
00:11:42.478      4.536 -     4.563:   28.5672%  (     5651)
00:11:42.478      4.563 -     4.591:   35.1658%  (     5824)
00:11:42.478      4.591 -     4.619:   41.0448%  (     5189)
00:11:42.478      4.619 -     4.647:   47.5346%  (     5728)
00:11:42.478      4.647 -     4.675:   53.3593%  (     5141)
00:11:42.478      4.675 -     4.703:   58.7161%  (     4728)
00:11:42.478      4.703 -     4.730:   63.7896%  (     4478)
00:11:42.478      4.730 -     4.758:   69.3356%  (     4895)
00:11:42.478      4.758 -     4.786:   74.2562%  (     4343)
00:11:42.478      4.786 -     4.814:   78.2296%  (     3507)
00:11:42.478      4.814 -     4.842:   81.7441%  (     3102)
00:11:42.478      4.842 -     4.870:   84.9630%  (     2841)
00:11:42.478      4.870 -     4.897:   88.6327%  (     3239)
00:11:42.478      4.897 -     4.925:   91.9263%  (     2907)
00:11:42.478      4.925 -     4.953:   94.5231%  (     2292)
00:11:42.478      4.953 -     4.981:   96.2260%  (     1503)
00:11:42.478      4.981 -     5.009:   97.4972%  (     1122)
00:11:42.478      5.009 -     5.037:   98.4025%  (      799)
00:11:42.478      5.037 -     5.064:   98.9554%  (      488)
00:11:42.478      5.064 -     5.092:   99.3633%  (      360)
00:11:42.478      5.092 -     5.120:   99.4902%  (      112)
00:11:42.478      5.120 -     5.148:   99.5253%  (       31)
00:11:42.478      5.148 -     5.176:   99.5309%  (        5)
00:11:42.478      5.176 -     5.203:   99.5321%  (        1)
00:11:42.478      5.231 -     5.259:   99.5332%  (        1)
00:11:42.478      5.287 -     5.315:   99.5343%  (        1)
00:11:42.478      5.732 -     5.760:   99.5355%  (        1)
00:11:42.478      5.788 -     5.816:   99.5366%  (        1)
00:11:42.478      6.233 -     6.261:   99.5377%  (        1)
00:11:42.478      6.317 -     6.344:   99.5389%  (        1)
00:11:42.478      6.623 -     6.650:   99.5400%  (        1)
00:11:42.478      6.984 -     7.012:   99.5411%  (        1)
00:11:42.478      7.569 -     7.624:   99.5423%  (        1)
00:11:42.478      7.624 -     7.680:   99.5445%  (        2)
00:11:42.478      7.680 -     7.736:   99.5457%  (        1)
00:11:42.478      7.791 -     7.847:   99.5525%  (        6)
00:11:42.478      7.847 -     7.903:   99.5581%  (        5)
00:11:42.478      7.903 -     7.958:   99.5706%  (       11)
00:11:42.478      7.958 -     8.014:   99.5763%  (        5)
00:11:42.478      8.014 -     8.070:   99.5865%  (        9)
00:11:42.478      8.070 -     8.125:   99.6001%  (       12)
00:11:42.478      8.125 -     8.181:   99.6080%  (        7)
00:11:42.478      8.181 -     8.237:   99.6204%  (       11)
00:11:42.478      8.237 -     8.292:   99.6329%  (       11)
00:11:42.478      8.292 -     8.348:   99.6510%  (       16)
00:11:42.478      8.348 -     8.403:   99.6658%  (       13)
00:11:42.478      8.403 -     8.459:   99.6703%  (        4)
00:11:42.478      8.459 -     8.515:   99.6918%  (       19)
00:11:42.478      8.515 -     8.570:   99.7088%  (       15)
00:11:42.478      8.570 -     8.626:   99.7156%  (        6)
00:11:42.478      8.626 -     8.682:   99.7224%  (        6)
00:11:42.478      8.682 -     8.737:   99.7371%  (       13)
00:11:42.478      8.737 -     8.793:   99.7417%  (        4)
00:11:42.478      8.793 -     8.849:   99.7507%  (        8)
00:11:42.478      8.849 -     8.904:   99.7587%  (        7)
00:11:42.478      8.904 -     8.960:   99.7723%  (       12)
00:11:42.478      8.960 -     9.016:   99.7779%  (        5)
00:11:42.478      9.016 -     9.071:   99.7859%  (        7)
00:11:42.478      9.071 -     9.127:   99.7949%  (        8)
00:11:42.478      9.127 -     9.183:   99.8085%  (       12)
00:11:42.478      9.183 -     9.238:   99.8153%  (        6)
00:11:42.478      9.238 -     9.294:   99.8267%  (       10)
00:11:42.478      9.294 -     9.350:   99.8391%  (       11)
00:11:42.478      9.350 -     9.405:   99.8516%  (       11)
00:11:42.478      9.405 -     9.461:   99.8629%  (       10)
00:11:42.478      9.461 -     9.517:   99.8720%  (        8)
00:11:42.478      9.517 -     9.572:   99.8776%  (        5)
00:11:42.478      9.572 -     9.628:   99.8810%  (        3)
00:11:42.478      9.628 -     9.683:   99.8901%  (        8)
00:11:42.478      9.683 -     9.739:   99.8946%  (        4)
00:11:42.478      9.739 -     9.795:   99.9003%  (        5)
00:11:42.478      9.795 -     9.850:   99.9037%  (        3)
00:11:42.478      9.850 -     9.906:   99.9116%  (        7)
00:11:42.478      9.906 -     9.962:   99.9173%  (        5)
00:11:42.478      9.962 -    10.017:   99.9207%  (        3)
00:11:42.478     10.017 -    10.073:   99.9264%  (        5)
00:11:42.478     10.073 -    10.129:   99.9343%  (        7)
00:11:42.478     10.129 -    10.184:   99.9388%  (        4)
00:11:42.478     10.184 -    10.240:   99.9434%  (        4)
00:11:42.478     10.240 -    10.296:   99.9479%  (        4)
00:11:42.478     10.296 -    10.351:   99.9501%  (        2)
00:11:42.478     10.351 -    10.407:   99.9535%  (        3)
00:11:42.478     10.407 -    10.463:   99.9569%  (        3)
00:11:42.478     10.463 -    10.518:   99.9603%  (        3)
00:11:42.478     10.518 -    10.574:   99.9626%  (        2)
00:11:42.478     10.574 -    10.630:   99.9671%  (        4)
00:11:42.478     10.630 -    10.685:   99.9683%  (        1)
00:11:42.478     10.741 -    10.797:   99.9705%  (        2)
00:11:42.478     10.797 -    10.852:   99.9717%  (        1)
00:11:42.478     10.852 -    10.908:   99.9728%  (        1)
00:11:42.478     10.908 -    10.963:   99.9739%  (        1)
00:11:42.478     10.963 -    11.019:   99.9785%  (        4)
00:11:42.478     11.075 -    11.130:   99.9807%  (        2)
00:11:42.478     11.130 -    11.186:   99.9819%  (        1)
00:11:42.478     11.297 -    11.353:   99.9830%  (        1)
00:11:42.478     11.353 -    11.409:   99.9841%  (        1)
00:11:42.478     11.464 -    11.520:   99.9853%  (        1)
00:11:42.478     11.576 -    11.631:   99.9864%  (        1)
00:11:42.478     11.965 -    12.021:   99.9875%  (        1)
00:11:42.478     12.021 -    12.077:   99.9887%  (        1)
00:11:42.478     12.077 -    12.132:   99.9898%  (        1)
00:11:42.478     12.856 -    12.911:   99.9909%  (        1)
00:11:42.478     12.911 -    12.967:   99.9921%  (        1)
00:11:42.478     13.134 -    13.190:   99.9932%  (        1)
00:11:42.478     13.190 -    13.245:   99.9943%  (        1)
00:11:42.478     14.803 -    14.915:   99.9955%  (        1)
00:11:42.478     17.697 -    17.809:   99.9966%  (        1)
00:11:42.478     18.810 -    18.922:   99.9977%  (        1)
00:11:42.478     19.256 -    19.367:   99.9989%  (        1)
00:11:42.478     35.395 -    35.617:  100.0000%  (        1)
00:11:42.478  
00:11:42.478  Complete histogram
00:11:42.478  ==================
00:11:42.479         Range in us     Cumulative     Count
00:11:42.479      2.643 -     2.657:    0.0023%  (        2)
00:11:42.479      2.657 -     2.671:    0.0249%  (       20)
00:11:42.479      2.671 -     2.685:    2.0847%  (     1818)
00:11:42.479      2.685 -     2.699:   23.9775%  (    19323)
00:11:42.479      2.699 -     2.713:   70.0052%  (    40625)
00:11:42.479      2.713 -     2.727:   94.6228%  (    21728)
00:11:42.479      2.727 -     2.741:   98.7650%  (     3656)
00:11:42.479      2.741 -     2.755:   99.1695%  (      357)
00:11:42.479      2.755 -     2.769:   99.2545%  (       75)
00:11:42.479      2.769 -     2.783:   99.3043%  (       44)
00:11:42.479      2.783 -     2.797:   99.3553%  (       45)
00:11:42.479      2.797 -     2.810:   99.3882%  (       29)
00:11:42.479      2.810 -     2.824:   99.4244%  (       32)
00:11:42.479      2.824 -     2.838:   99.4448%  (       18)
00:11:42.479      2.838 -     2.852:   99.4494%  (        4)
00:11:42.479      2.852 -     2.866:   99.4516%  (        2)
00:11:42.479      2.866 -     2.880:   99.4528%  (        1)
00:11:42.479      2.880 -     2.894:   99.4539%  (        1)
00:11:42.479      2.950 -     2.963:   99.4550%  (        1)
00:11:42.479      2.963 -     2.977:   99.4562%  (        1)
00:11:42.479      5.510 -     5.537:   99.4573%  (        1)
00:11:42.479      5.565 -     5.593:   99.4607%  (        3)
00:11:42.479      5.704 -     5.732:   99.4652%  (        4)
00:11:42.479      5.732 -     5.760:   99.4686%  (        3)
00:11:42.479      5.760 -     5.788:   99.4720%  (        3)
00:11:42.479      5.788 -     5.816:   99.4777%  (        5)
00:11:42.479      5.816 -     5.843:   99.4800%  (        2)
00:11:42.479      5.843 -     5.871:   99.4856%  (        5)
00:11:42.479      5.871 -     5.899:   99.4913%  (        5)
00:11:42.479      5.899 -     5.927:   99.4936%  (        2)
00:11:42.479      5.927 -     5.955:   99.4992%  (        5)
00:11:42.479      5.955 -     5.983:   99.5026%  (        3)
00:11:42.479      5.983 -     6.010:   99.5049%  (        2)
00:11:42.479      6.010 -     6.038:   99.5139%  (        8)
00:11:42.479      6.038 -     6.066:   99.5162%  (        2)
00:11:42.479      6.066 -     6.094:   99.5253%  (        8)
00:11:42.479      6.094 -     6.122:   99.5355%  (        9)
00:11:42.479      6.122 -     6.150:   99.5423%  (        6)
00:11:42.479      6.150 -     6.177:   99.5491%  (        6)
00:11:42.479      6.177 -     6.205:   99.5559%  (        6)
00:11:42.479      6.205 -     6.233:   99.5695%  (       12)
00:11:42.479      6.233 -     6.261:   99.5774%  (        7)
00:11:42.479      6.261 -     6.289:   99.5853%  (        7)
00:11:42.479      6.289 -     6.317:   99.5978%  (       11)
00:11:42.479      6.317 -     6.344:   99.6046%  (        6)
00:11:42.479      6.344 -     6.372:   99.6091%  (        4)
00:11:42.479      6.372 -     6.400:   99.6182%  (        8)
00:11:42.479      6.400 -     6.428:   99.6284%  (        9)
00:11:42.479      6.428 -     6.456:   99.6318%  (        3)
00:11:42.479      6.456 -     6.483:   99.6420%  (        9)
00:11:42.479      6.483 -     6.511:   99.6556%  (       12)
00:11:42.479      6.511 -     6.539:   99.6658%  (        9)
00:11:42.479      6.539 -     6.567:   99.6760%  (        9)
00:11:42.479      6.567 -     6.595:   99.6862%  (        9)
00:11:42.479      6.595 -     6.623:   99.6952%  (        8)
00:11:42.479      6.623 -     6.650:   99.6975%  (        2)
00:11:42.479      6.650 -     6.678:   99.7054%  (        7)
00:11:42.479      6.678 -     6.706:   99.7168%  (       10)
00:11:42.479      6.706 -     6.734:   99.7247%  (        7)
00:11:42.479      6.734 -     6.762:   99.7315%  (        6)
00:11:42.479      6.762 -     6.790:   99.7394%  (        7)
00:11:42.479      6.790 -     6.817:   99.7496%  (        9)
00:11:42.479      6.817 -     6.845:   99.7564%  (        6)
00:11:42.479      6.845 -     6.873:   99.7587%  (        2)
00:11:42.479      6.873 -     6.901:   99.7632%  (        4)
00:11:42.479      6.901 -     6.929:   99.7734%  (        9)
00:11:42.479      6.929 -     6.957:   99.7847%  (       10)
00:11:42.479      6.957 -     6.984:   99.7915%  (        6)
00:11:42.479      6.984 -     7.012:   99.7983%  (        6)
00:11:42.479      7.012 -     7.040:   99.7995%  (        1)
00:11:42.479      7.040 -     7.068:   99.8063%  (        6)
00:11:42.479      7.068 -     7.096:   99.8153%  (        8)
00:11:42.479      7.096 -     7.123:   99.8244%  (        8)
00:11:42.479      7.123 -     7.179:   99.8335%  (        8)
00:11:42.479      7.179 -     7.235:   99.8380%  (        4)
00:11:42.479      7.235 -     7.290:   99.8470%  (        8)
00:11:42.479      7.290 -     7.346:   99.8504%  (        3)
00:11:42.479      7.346 -     7.402:   99.8572%  (        6)
00:11:42.479      7.402 -     7.457:   99.8595%  (        2)
00:11:42.479      7.457 -     7.513:   99.8663%  (        6)
00:11:42.479      7.513 -     7.569:   99.8742%  (        7)
00:11:42.479      7.569 -     7.624:   99.8799%  (        5)
00:11:42.479      7.624 -     7.680:   99.8890%  (        8)
00:11:42.479      7.680 -     7.736:   99.8958%  (        6)
00:11:42.479      7.736 -     7.791:   99.9048%  (        8)
00:11:42.479      7.791 -     7.847:   99.9105%  (        5)
00:11:42.479      7.847 -     7.903:   99.9173%  (        6)
00:11:42.479      7.903 -     7.958:   99.9264%  (        8)
00:11:42.479      7.958 -     8.014:   99.9298%  (        3)
00:11:42.479      8.014 -     8.070:   99.9343%  (        4)
00:11:42.479      8.070 -     8.125:   99.9354%  (        1)
00:11:42.479      8.125 -     8.181:   99.9377%  (        2)
00:11:42.479      8.181 -     8.237:   99.9411%  (        3)
00:11:42.479      8.237 -     8.292:   99.9467%  (        5)
00:11:42.479      8.292 -     8.348:   99.9490%  (        2)
00:11:42.479      8.348 -     8.403:   99.9513%  (        2)
00:11:42.479      8.403 -     8.459:   99.9569%  (        5)
00:11:42.479      8.459 -     8.515:   99.9603%  (        3)
00:11:42.479      8.515 -     8.570:   99.9660%  (        5)
00:11:42.479      8.570 -     8.626:   99.9683%  (        2)
00:11:42.479      8.626 -     8.682:   99.9705%  (        2)
00:11:42.479      8.682 -     8.737:   99.9717%  (        1)
00:11:42.479      8.737 -     8.793:   99.9762%  (        4)
00:11:42.479      8.793 -     8.849:   99.9785%  (        2)
00:11:42.479      8.849 -     8.904:   99.9796%  (        1)
00:11:42.479      8.904 -     8.960:   99.9819%  (        2)
00:11:42.479      9.016 -     9.071:   99.9830%  (        1)
00:11:42.479      9.238 -     9.294:   99.9841%  (        1)
00:11:42.479      9.405 -     9.461:   99.9853%  (        1)
00:11:42.479     10.017 -    10.073:   99.9864%  (        1)
00:11:42.479     10.630 -    10.685:   99.9875%  (        1)
00:11:42.479     10.797 -    10.852:   99.9887%  (        1)
00:11:42.479     10.852 -    10.908:   99.9898%  (        1)
00:11:42.479     11.910 -    11.965:   99.9909%  (        1)
00:11:42.479     12.132 -    12.188:   99.9921%  (        1)
00:11:42.479     12.299 -    12.355:   99.9932%  (        1)
00:11:42.479     16.362 -    16.473:   99.9943%  (        1)
00:11:42.479     17.475 -    17.586:   99.9955%  (        1)
00:11:42.479     19.701 -    19.812:   99.9966%  (        1)
00:11:42.479     36.508 -    36.730:   99.9977%  (        1)
00:11:42.479    191.443 -   192.334:   99.9989%  (        1)
00:11:42.479   1068.522 -  1075.645:  100.0000%  (        1)
00:11:42.479  
00:11:42.479  
00:11:42.479  real	0m1.344s
00:11:42.479  user	0m1.087s
00:11:42.479  sys	0m0.188s
00:11:42.479   10:49:31	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:42.479   10:49:31	-- common/autotest_common.sh@10 -- # set +x
00:11:42.479  ************************************
00:11:42.479  END TEST nvme_overhead
00:11:42.479  ************************************
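
The two histograms above bucket per-IO submit and complete latencies in microseconds for the one-second run. The flag meanings below are inferred from the logged invocation, so treat this as a sketch rather than documented usage:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  # -o 4096: IO size in bytes, -t 1: seconds to run, -H: print the latency
  # histograms seen above, -i 0: shared-memory ID (all as logged)
  sudo ./test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
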
00:11:42.479   10:49:31	-- nvme/nvme.sh@93 -- # run_test nvme_arbitration /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0
00:11:42.479   10:49:31	-- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:11:42.479   10:49:31	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:42.479   10:49:31	-- common/autotest_common.sh@10 -- # set +x
00:11:42.479  ************************************
00:11:42.479  START TEST nvme_arbitration
00:11:42.479  ************************************
00:11:42.479   10:49:31	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -t 3 -i 0
00:11:45.775  Initializing NVMe Controllers
00:11:45.775  Attached to 0000:5e:00.0
00:11:45.775  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 0
00:11:45.775  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 1
00:11:45.775  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 2
00:11:45.775  Associating INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) with lcore 3
00:11:45.775  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:11:45.775  /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:11:45.775  Initialization complete. Launching workers.
00:11:45.775  Starting thread on core 1 with urgent priority queue
00:11:45.775  Starting thread on core 2 with urgent priority queue
00:11:45.775  Starting thread on core 3 with urgent priority queue
00:11:45.775  Starting thread on core 0 with urgent priority queue
00:11:45.775  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 0: 10651.00 IO/s     9.39 secs/100000 ios
00:11:45.775  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 1: 10784.00 IO/s     9.27 secs/100000 ios
00:11:45.775  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 2:  8600.00 IO/s    11.63 secs/100000 ios
00:11:45.775  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ) core 3:  7573.67 IO/s    13.20 secs/100000 ios
00:11:45.775  ========================================================
00:11:45.775  
00:11:45.775  
00:11:45.775  real	0m3.363s
00:11:45.775  user	0m9.164s
00:11:45.775  sys	0m0.188s
00:11:45.775   10:49:34	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:45.775   10:49:34	-- common/autotest_common.sh@10 -- # set +x
00:11:45.775  ************************************
00:11:45.775  END TEST nvme_arbitration
00:11:45.775  ************************************
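
The arbitration example announces its fully expanded configuration before launching the four per-core workers, so the "-q 64 -s 131072 -w randrw -M 50 ..." line above documents the defaults implied by the short command. The logged invocation, runnable as-is against the same setup:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  sudo ./build/examples/arbitration -t 3 -i 0   # -t: run time in seconds, -i: shared-memory ID
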
00:11:45.775   10:49:34	-- nvme/nvme.sh@94 -- # run_test nvme_single_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 -L log
00:11:45.775   10:49:34	-- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']'
00:11:45.775   10:49:34	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:45.775   10:49:34	-- common/autotest_common.sh@10 -- # set +x
00:11:45.775  ************************************
00:11:45.775  START TEST nvme_single_aen
00:11:45.775  ************************************
00:11:45.775   10:49:34	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -T -i 0 -L log
00:11:45.775  [2024-12-15 10:49:34.722133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:11:45.775  [2024-12-15 10:49:34.722170] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:46.035  [2024-12-15 10:49:34.975744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:11:46.035  [2024-12-15 10:49:34.975789] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2127748) is not found. Dropping the request.
00:11:46.035  [2024-12-15 10:49:34.975815] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2127748) is not found. Dropping the request.
00:11:46.035  [2024-12-15 10:49:34.975831] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2127748) is not found. Dropping the request.
00:11:46.035  [2024-12-15 10:49:34.975847] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2127748) is not found. Dropping the request.
00:11:51.316  Asynchronous Event Request test
00:11:51.316  Attached to 0000:5e:00.0
00:11:51.316  Reset controller to setup AER completions for this process
00:11:51.316  Registering asynchronous event callbacks...
00:11:51.316  Getting orig temperature thresholds of all controllers
00:11:51.316  0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:51.316  Setting all controllers temperature threshold low to trigger AER
00:11:51.316  Waiting for all controllers temperature threshold to be set lower
00:11:51.316  Waiting for all controllers to trigger AER and reset threshold
00:11:51.316  0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:51.316  aer_cb - Resetting Temp Threshold for device: 0000:5e:00.0
00:11:51.316  0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:11:51.316  Cleaning up...
00:11:51.316  
00:11:51.316  real	0m5.386s
00:11:51.316  user	0m4.437s
00:11:51.317  sys	0m0.881s
00:11:51.317   10:49:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:11:51.317   10:49:40	-- common/autotest_common.sh@10 -- # set +x
00:11:51.317  ************************************
00:11:51.317  END TEST nvme_single_aen
00:11:51.317  ************************************
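
As the output above shows, the aer tool run with -T lowers each controller's temperature threshold below the current temperature, waits for the resulting Asynchronous Event Request, and restores the threshold from the aer_cb callback. A re-run sketch, with argument roles read from this trace (-L log enables the tool's "log" trace component):

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  sudo ./test/nvme/aer/aer -T -i 0 -L log
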
00:11:51.317   10:49:40	-- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:11:51.317   10:49:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:51.317   10:49:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:51.317   10:49:40	-- common/autotest_common.sh@10 -- # set +x
00:11:51.317  ************************************
00:11:51.317  START TEST nvme_doorbell_aers
00:11:51.317  ************************************
00:11:51.317   10:49:40	-- common/autotest_common.sh@1114 -- # nvme_doorbell_aers
00:11:51.317   10:49:40	-- nvme/nvme.sh@70 -- # bdfs=()
00:11:51.317   10:49:40	-- nvme/nvme.sh@70 -- # local bdfs bdf
00:11:51.317   10:49:40	-- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:11:51.317    10:49:40	-- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:11:51.317    10:49:40	-- common/autotest_common.sh@1508 -- # bdfs=()
00:11:51.317    10:49:40	-- common/autotest_common.sh@1508 -- # local bdfs
00:11:51.317    10:49:40	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:51.317     10:49:40	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:11:51.317     10:49:40	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:11:51.317    10:49:40	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:11:51.317    10:49:40	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:11:51.317   10:49:40	-- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:11:51.317   10:49:40	-- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:5e:00.0'
00:11:51.887  [2024-12-15 10:49:40.648605] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2131368) is not found. Dropping the request.
00:12:01.990  Executing: test_write_invalid_db
00:12:01.990  Waiting for AER completion...
00:12:01.990  Failure: test_write_invalid_db
00:12:01.990  
00:12:01.990  Executing: test_invalid_db_write_overflow_sq
00:12:01.990  Waiting for AER completion...
00:12:01.990  Failure: test_invalid_db_write_overflow_sq
00:12:01.990  
00:12:01.990  Executing: test_invalid_db_write_overflow_cq
00:12:01.990  Waiting for AER completion...
00:12:01.990  Failure: test_invalid_db_write_overflow_cq
00:12:01.990  
00:12:01.990  
00:12:01.990  real	0m10.122s
00:12:01.990  user	0m7.119s
00:12:01.990  sys	0m2.900s
00:12:01.990   10:49:50	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:01.990   10:49:50	-- common/autotest_common.sh@10 -- # set +x
00:12:01.990  ************************************
00:12:01.990  END TEST nvme_doorbell_aers
00:12:01.990  ************************************
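
nvme_doorbell_aers iterates the attached BDFs (one here, 0000:5e:00.0) and runs the doorbell binary against each, bounded by a 10-second timeout exactly as traced at nvme.sh@73. A standalone sketch of the same invocation:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  sudo timeout --preserve-status 10 \
    ./test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:5e:00.0'
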
00:12:01.990    10:49:50	-- nvme/nvme.sh@97 -- # uname
00:12:01.990   10:49:50	-- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:12:01.990   10:49:50	-- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:12:01.990   10:49:50	-- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']'
00:12:01.990   10:49:50	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:01.990   10:49:50	-- common/autotest_common.sh@10 -- # set +x
00:12:01.990  ************************************
00:12:01.990  START TEST nvme_multi_aen
00:12:01.990  ************************************
00:12:01.990   10:49:50	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/aer/aer -m -T -i 0 -L log
00:12:01.990  [2024-12-15 10:49:50.340053] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:01.990  [2024-12-15 10:49:50.340100] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:01.990  [2024-12-15 10:49:50.599567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:12:01.990  [2024-12-15 10:49:50.599612] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2131368) is not found. Dropping the request.
00:12:01.990  [2024-12-15 10:49:50.599643] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2131368) is not found. Dropping the request.
00:12:01.990  [2024-12-15 10:49:50.599660] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2131368) is not found. Dropping the request.
00:12:01.990  [2024-12-15 10:49:50.603919] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:01.990  [2024-12-15 10:49:50.604020] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:01.990  Child process pid: 2133526
00:12:07.280  [Child] Asynchronous Event Request test
00:12:07.280  [Child] Attached to 0000:5e:00.0
00:12:07.280  [Child] Registering asynchronous event callbacks...
00:12:07.280  [Child] Getting orig temperature thresholds of all controllers
00:12:07.280  [Child] 0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:07.280  [Child] Waiting for all controllers to trigger AER and reset threshold
00:12:07.280  [Child] 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:07.280  [Child] 0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:07.280  [Child] 0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:12:07.280  [Child] Cleaning up...
00:12:07.280  [Child] 0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:12:07.280  Asynchronous Event Request test
00:12:07.280  Attached to 0000:5e:00.0
00:12:07.280  Reset controller to setup AER completions for this process
00:12:07.280  Registering asynchronous event callbacks...
00:12:07.280  Getting orig temperature thresholds of all controllers
00:12:07.280  0000:5e:00.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:12:07.280  Setting all controllers temperature threshold low to trigger AER
00:12:07.280  Waiting for all controllers temperature threshold to be set lower
00:12:07.280  Waiting for all controllers to trigger AER and reset threshold
00:12:07.280  0000:5e:00.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:12:07.280  aer_cb - Resetting Temp Threshold for device: 0000:5e:00.0
00:12:07.280  0000:5e:00.0: Current Temperature:         310 Kelvin (37 Celsius)
00:12:07.280  Cleaning up...
00:12:07.280  
00:12:07.280  real	0m4.970s
00:12:07.280  user	0m3.955s
00:12:07.280  sys	0m2.144s
00:12:07.280   10:49:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:07.280   10:49:55	-- common/autotest_common.sh@10 -- # set +x
00:12:07.280  ************************************
00:12:07.280  END TEST nvme_multi_aen
00:12:07.280  ************************************
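
The multi-AEN variant adds -m, which launches a second process (the [Child] lines and the "Child process pid" message above) so that parent and child both register AER callbacks against the same controller. A sketch of the logged invocation:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  sudo ./test/nvme/aer/aer -m -T -i 0 -L log   # -m: multi-process variant, per the trace
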
00:12:07.280   10:49:55	-- nvme/nvme.sh@99 -- # run_test nvme_startup /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000
00:12:07.280   10:49:55	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:12:07.280   10:49:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:07.280   10:49:55	-- common/autotest_common.sh@10 -- # set +x
00:12:07.280  ************************************
00:12:07.280  START TEST nvme_startup
00:12:07.280  ************************************
00:12:07.280   10:49:55	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/startup/startup -t 1000000
00:12:07.280  Initializing NVMe Controllers
00:12:07.280  Attached to 0000:5e:00.0
00:12:07.280  Initialization complete.
00:12:07.280  Time used:250709.500      (us).
00:12:07.280  
00:12:07.280  real	0m0.300s
00:12:07.280  user	0m0.077s
00:12:07.280  sys	0m0.181s
00:12:07.280   10:49:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:07.280   10:49:55	-- common/autotest_common.sh@10 -- # set +x
00:12:07.280  ************************************
00:12:07.280  END TEST nvme_startup
00:12:07.280  ************************************
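
nvme_startup only measures controller bring-up: it attaches, reports the initialization time (about 251 ms here), and exits. Reproducing the invocation verbatim:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  sudo ./test/nvme/startup/startup -t 1000000   # -t value copied from the logged run
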
00:12:07.280   10:49:55	-- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:12:07.280   10:49:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:07.280   10:49:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:07.280   10:49:55	-- common/autotest_common.sh@10 -- # set +x
00:12:07.280  ************************************
00:12:07.280  START TEST nvme_multi_secondary
00:12:07.280  ************************************
00:12:07.280   10:49:55	-- common/autotest_common.sh@1114 -- # nvme_multi_secondary
00:12:07.280   10:49:55	-- nvme/nvme.sh@52 -- # pid0=2134237
00:12:07.280   10:49:55	-- nvme/nvme.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:12:07.280   10:49:55	-- nvme/nvme.sh@54 -- # pid1=2134238
00:12:07.280   10:49:55	-- nvme/nvme.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:12:07.280   10:49:55	-- nvme/nvme.sh@53 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:12:10.576  Initializing NVMe Controllers
00:12:10.576  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:10.576  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 2
00:12:10.576  Initialization complete. Launching workers.
00:12:10.576  ========================================================
00:12:10.576                                                                             Latency(us)
00:12:10.576  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:10.576  PCIE (0000:5e:00.0) NSID 1 from core  2:   41983.94     164.00     380.70      23.50    6863.51
00:12:10.576  ========================================================
00:12:10.576  Total                                  :   41983.94     164.00     380.70      23.50    6863.51
00:12:10.576  
00:12:10.576   10:49:59	-- nvme/nvme.sh@56 -- # wait 2134237
00:12:10.576  Initializing NVMe Controllers
00:12:10.576  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:10.576  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 1
00:12:10.576  Initialization complete. Launching workers.
00:12:10.576  ========================================================
00:12:10.576                                                                             Latency(us)
00:12:10.576  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:10.576  PCIE (0000:5e:00.0) NSID 1 from core  1:   80816.01     315.69     197.66      26.67    4647.89
00:12:10.576  ========================================================
00:12:10.576  Total                                  :   80816.01     315.69     197.66      26.67    4647.89
00:12:10.576  
00:12:12.488  Initializing NVMe Controllers
00:12:12.488  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:12.488  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:12:12.488  Initialization complete. Launching workers.
00:12:12.488  ========================================================
00:12:12.488                                                                             Latency(us)
00:12:12.488  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:12.488  PCIE (0000:5e:00.0) NSID 1 from core  0:   82524.02     322.36     193.56      26.80    3406.51
00:12:12.488  ========================================================
00:12:12.488  Total                                  :   82524.02     322.36     193.56      26.80    3406.51
00:12:12.488  
00:12:12.488   10:50:01	-- nvme/nvme.sh@57 -- # wait 2134238
00:12:12.488   10:50:01	-- nvme/nvme.sh@61 -- # pid0=2134979
00:12:12.488   10:50:01	-- nvme/nvme.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:12:12.488   10:50:01	-- nvme/nvme.sh@63 -- # pid1=2134981
00:12:12.488   10:50:01	-- nvme/nvme.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:12:12.488   10:50:01	-- nvme/nvme.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:12:15.781  Initializing NVMe Controllers
00:12:15.781  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:15.781  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 1
00:12:15.781  Initialization complete. Launching workers.
00:12:15.781  ========================================================
00:12:15.781                                                                             Latency(us)
00:12:15.781  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:15.781  PCIE (0000:5e:00.0) NSID 1 from core  1:   81312.81     317.63     196.45      25.66    3234.19
00:12:15.781  ========================================================
00:12:15.781  Total                                  :   81312.81     317.63     196.45      25.66    3234.19
00:12:15.781  
00:12:15.781  Initializing NVMe Controllers
00:12:15.781  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:15.781  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:12:15.781  Initialization complete. Launching workers.
00:12:15.781  ========================================================
00:12:15.781                                                                             Latency(us)
00:12:15.781  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:15.781  PCIE (0000:5e:00.0) NSID 1 from core  0:   80580.67     314.77     198.24      25.13    3620.38
00:12:15.781  ========================================================
00:12:15.781  Total                                  :   80580.67     314.77     198.24      25.13    3620.38
00:12:15.781  
00:12:18.320  Initializing NVMe Controllers
00:12:18.320  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:12:18.320  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 2
00:12:18.320  Initialization complete. Launching workers.
00:12:18.320  ========================================================
00:12:18.320                                                                             Latency(us)
00:12:18.320  Device Information                     :       IOPS      MiB/s    Average        min        max
00:12:18.320  PCIE (0000:5e:00.0) NSID 1 from core  2:   44980.72     175.71     355.19      22.77    5663.70
00:12:18.320  ========================================================
00:12:18.320  Total                                  :   44980.72     175.71     355.19      22.77    5663.70
00:12:18.320  
00:12:18.320   10:50:06	-- nvme/nvme.sh@65 -- # wait 2134979
00:12:18.320   10:50:06	-- nvme/nvme.sh@66 -- # wait 2134981
00:12:18.320  
00:12:18.320  real	0m11.095s
00:12:18.320  user	0m18.452s
00:12:18.320  sys	0m1.130s
00:12:18.320   10:50:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:18.320   10:50:06	-- common/autotest_common.sh@10 -- # set +x
00:12:18.320  ************************************
00:12:18.320  END TEST nvme_multi_secondary
00:12:18.320  ************************************
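
nvme_multi_secondary launches three spdk_nvme_perf instances that share one shared-memory group (-i 0): the first to start becomes the primary process and the other two attach as secondaries, each pinned to its own core via -c. A sketch mirroring the first round of commands traced at nvme.sh@51/@53/@55, run concurrently:

  cd /var/jenkins/workspace/nvme-phy-autotest/spdk
  # the 5 s run on core 0 outlasts the two 3 s runs on cores 1 and 2
  sudo ./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
  sudo ./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
  sudo ./build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
  wait
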
00:12:18.320   10:50:06	-- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:12:18.320   10:50:06	-- nvme/nvme.sh@102 -- # kill_stub
00:12:18.320   10:50:06	-- common/autotest_common.sh@1075 -- # [[ -e /proc/2127080 ]]
00:12:18.320   10:50:06	-- common/autotest_common.sh@1076 -- # kill 2127080
00:12:18.320   10:50:06	-- common/autotest_common.sh@1077 -- # wait 2127080
00:12:18.580  [2024-12-15 10:50:07.530590] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2133513) is not found. Dropping the request.
00:12:18.580  [2024-12-15 10:50:07.530689] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2133513) is not found. Dropping the request.
00:12:18.580  [2024-12-15 10:50:07.530728] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2133513) is not found. Dropping the request.
00:12:18.580  [2024-12-15 10:50:07.530764] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 2133513) is not found. Dropping the request.
00:12:22.777   10:50:11	-- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0
00:12:22.777   10:50:11	-- common/autotest_common.sh@1083 -- # echo 2
00:12:22.777   10:50:11	-- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:12:22.777   10:50:11	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:22.777   10:50:11	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:22.777   10:50:11	-- common/autotest_common.sh@10 -- # set +x
00:12:22.777  ************************************
00:12:22.777  START TEST bdev_nvme_reset_stuck_adm_cmd
00:12:22.777  ************************************
00:12:22.777   10:50:11	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:12:22.777  * Looking for test storage...
00:12:22.777  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:12:22.777    10:50:11	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:12:22.777     10:50:11	-- common/autotest_common.sh@1690 -- # lcov --version
00:12:22.777     10:50:11	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:12:22.777    10:50:11	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:12:22.777    10:50:11	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:12:22.777    10:50:11	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:12:22.777    10:50:11	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:12:22.777    10:50:11	-- scripts/common.sh@335 -- # IFS=.-:
00:12:22.777    10:50:11	-- scripts/common.sh@335 -- # read -ra ver1
00:12:22.777    10:50:11	-- scripts/common.sh@336 -- # IFS=.-:
00:12:22.777    10:50:11	-- scripts/common.sh@336 -- # read -ra ver2
00:12:22.777    10:50:11	-- scripts/common.sh@337 -- # local 'op=<'
00:12:22.777    10:50:11	-- scripts/common.sh@339 -- # ver1_l=2
00:12:22.777    10:50:11	-- scripts/common.sh@340 -- # ver2_l=1
00:12:22.777    10:50:11	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:12:22.777    10:50:11	-- scripts/common.sh@343 -- # case "$op" in
00:12:22.777    10:50:11	-- scripts/common.sh@344 -- # : 1
00:12:22.777    10:50:11	-- scripts/common.sh@363 -- # (( v = 0 ))
00:12:22.777    10:50:11	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:22.777     10:50:11	-- scripts/common.sh@364 -- # decimal 1
00:12:22.777     10:50:11	-- scripts/common.sh@352 -- # local d=1
00:12:22.777     10:50:11	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:22.777     10:50:11	-- scripts/common.sh@354 -- # echo 1
00:12:22.777    10:50:11	-- scripts/common.sh@364 -- # ver1[v]=1
00:12:22.777     10:50:11	-- scripts/common.sh@365 -- # decimal 2
00:12:22.777     10:50:11	-- scripts/common.sh@352 -- # local d=2
00:12:22.777     10:50:11	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:22.777     10:50:11	-- scripts/common.sh@354 -- # echo 2
00:12:22.777    10:50:11	-- scripts/common.sh@365 -- # ver2[v]=2
00:12:22.777    10:50:11	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:12:22.777    10:50:11	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:12:22.777    10:50:11	-- scripts/common.sh@367 -- # return 0
00:12:22.777    10:50:11	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:22.777    10:50:11	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:12:22.777  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:22.777  		--rc genhtml_branch_coverage=1
00:12:22.777  		--rc genhtml_function_coverage=1
00:12:22.777  		--rc genhtml_legend=1
00:12:22.777  		--rc geninfo_all_blocks=1
00:12:22.777  		--rc geninfo_unexecuted_blocks=1
00:12:22.777  		
00:12:22.777  		'
00:12:22.777    10:50:11	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:12:22.777  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:22.777  		--rc genhtml_branch_coverage=1
00:12:22.777  		--rc genhtml_function_coverage=1
00:12:22.777  		--rc genhtml_legend=1
00:12:22.777  		--rc geninfo_all_blocks=1
00:12:22.777  		--rc geninfo_unexecuted_blocks=1
00:12:22.777  		
00:12:22.777  		'
00:12:22.777    10:50:11	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:12:22.777  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:22.777  		--rc genhtml_branch_coverage=1
00:12:22.777  		--rc genhtml_function_coverage=1
00:12:22.777  		--rc genhtml_legend=1
00:12:22.777  		--rc geninfo_all_blocks=1
00:12:22.777  		--rc geninfo_unexecuted_blocks=1
00:12:22.777  		
00:12:22.777  		'
00:12:22.777    10:50:11	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:12:22.777  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:22.777  		--rc genhtml_branch_coverage=1
00:12:22.777  		--rc genhtml_function_coverage=1
00:12:22.777  		--rc genhtml_legend=1
00:12:22.777  		--rc geninfo_all_blocks=1
00:12:22.777  		--rc geninfo_unexecuted_blocks=1
00:12:22.777  		
00:12:22.777  		'
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:12:22.777    10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:12:22.777    10:50:11	-- common/autotest_common.sh@1519 -- # bdfs=()
00:12:22.777    10:50:11	-- common/autotest_common.sh@1519 -- # local bdfs
00:12:22.777    10:50:11	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:12:22.777     10:50:11	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:12:22.777     10:50:11	-- common/autotest_common.sh@1508 -- # bdfs=()
00:12:22.777     10:50:11	-- common/autotest_common.sh@1508 -- # local bdfs
00:12:22.777     10:50:11	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:22.777      10:50:11	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:12:22.777      10:50:11	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:12:22.777     10:50:11	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:12:22.777     10:50:11	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:12:22.777    10:50:11	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:5e:00.0
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:5e:00.0 ']'
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=2136410
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:12:22.777   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 2136410
00:12:22.777   10:50:11	-- common/autotest_common.sh@829 -- # '[' -z 2136410 ']'
00:12:22.778   10:50:11	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:22.778   10:50:11	-- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0xF
00:12:22.778   10:50:11	-- common/autotest_common.sh@834 -- # local max_retries=100
00:12:22.778   10:50:11	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:22.778  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:22.778   10:50:11	-- common/autotest_common.sh@838 -- # xtrace_disable
00:12:22.778   10:50:11	-- common/autotest_common.sh@10 -- # set +x
00:12:22.778  [2024-12-15 10:50:11.728677] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:22.778  [2024-12-15 10:50:11.728750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2136410 ]
00:12:23.037  EAL: No free 2048 kB hugepages reported on node 1
00:12:23.037  [2024-12-15 10:50:11.857645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:23.037  [2024-12-15 10:50:11.963640] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:12:23.037  [2024-12-15 10:50:11.963826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:23.037  [2024-12-15 10:50:11.963918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:12:23.037  [2024-12-15 10:50:11.963999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:12:23.037  [2024-12-15 10:50:11.964003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.297  [2024-12-15 10:50:12.164733] 'OCF_Core' volume operations registered
00:12:23.297  [2024-12-15 10:50:12.168219] 'OCF_Cache' volume operations registered
00:12:23.297  [2024-12-15 10:50:12.172156] 'OCF Composite' volume operations registered
00:12:23.297  [2024-12-15 10:50:12.175646] 'SPDK_block_device' volume operations registered
00:12:23.866   10:50:12	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:23.866   10:50:12	-- common/autotest_common.sh@862 -- # return 0
00:12:23.866   10:50:12	-- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0
00:12:23.866   10:50:12	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.866   10:50:12	-- common/autotest_common.sh@10 -- # set +x
00:12:27.162  nvme0n1
00:12:27.162   10:50:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.162    10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:12:27.162   10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ba40t.txt
00:12:27.162   10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:12:27.162   10:50:15	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.162   10:50:15	-- common/autotest_common.sh@10 -- # set +x
00:12:27.162  true
00:12:27.162   10:50:15	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.162    10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:12:27.162   10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734256215
00:12:27.163   10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=2136957
00:12:27.163   10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:12:27.163   10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:12:27.163   10:50:15	-- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:12:29.070   10:50:17	-- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:12:29.070   10:50:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.070   10:50:17	-- common/autotest_common.sh@10 -- # set +x
00:12:29.070  [2024-12-15 10:50:17.578102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:12:29.070  [2024-12-15 10:50:17.578343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:12:29.070  [2024-12-15 10:50:17.578365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:12:29.070  [2024-12-15 10:50:17.578381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:12:29.070  [2024-12-15 10:50:17.579604] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:12:29.070   10:50:17	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.070   10:50:17	-- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 2136957
00:12:29.070  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 2136957
00:12:29.070   10:50:17	-- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 2136957
00:12:29.070    10:50:17	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:12:29.070   10:50:17	-- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:12:29.070   10:50:17	-- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:12:29.070   10:50:17	-- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.070   10:50:17	-- common/autotest_common.sh@10 -- # set +x
00:12:32.366   10:50:21	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.366   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ba40t.txt
00:12:32.366   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:12:32.366     10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:12:32.366     10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:12:32.366      10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:12:32.366   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:12:32.366     10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:12:32.366     10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:12:32.366      10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:12:32.366    10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:12:32.366   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:12:32.366   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ba40t.txt
00:12:32.366   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 2136410
00:12:32.366   10:50:21	-- common/autotest_common.sh@936 -- # '[' -z 2136410 ']'
00:12:32.366   10:50:21	-- common/autotest_common.sh@940 -- # kill -0 2136410
00:12:32.366    10:50:21	-- common/autotest_common.sh@941 -- # uname
00:12:32.366   10:50:21	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:32.366    10:50:21	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2136410
00:12:32.626   10:50:21	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:12:32.626   10:50:21	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:12:32.626   10:50:21	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2136410'
00:12:32.626  killing process with pid 2136410
00:12:32.626   10:50:21	-- common/autotest_common.sh@955 -- # kill 2136410
00:12:32.626   10:50:21	-- common/autotest_common.sh@960 -- # wait 2136410
00:12:33.196   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:12:33.196   10:50:21	-- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:12:33.196  
00:12:33.196  real	0m10.554s
00:12:33.196  user	0m39.420s
00:12:33.196  sys	0m0.903s
00:12:33.196   10:50:21	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:33.196   10:50:21	-- common/autotest_common.sh@10 -- # set +x
00:12:33.196  ************************************
00:12:33.196  END TEST bdev_nvme_reset_stuck_adm_cmd
00:12:33.196  ************************************
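
The bdev_nvme_reset_stuck_adm_cmd flow above is driven entirely over JSON-RPC against a running spdk_tgt (started with -m 0xF, as traced): attach the controller as nvme0, arm an admin-command error injection that holds opcode 10 (Get Features) for up to 15 s, issue that command via bdev_nvme_send_cmd so it gets stuck, reset the controller so the pending request completes manually, then detach. Distilled into the equivalent rpc.py calls, with arguments copied from the trace (the send_cmd payload is the logged base64 blob, elided here):

  RPC=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
  sudo $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0
  sudo $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
       --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # ... bdev_nvme_send_cmd issues the admin command that the injection holds ...
  sudo $RPC bdev_nvme_reset_controller nvme0    # completes the stuck command, as logged
  sudo $RPC bdev_nvme_detach_controller nvme0
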
00:12:33.196   10:50:22	-- nvme/nvme.sh@107 -- # [[ y == y ]]
00:12:33.196   10:50:22	-- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:12:33.196   10:50:22	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:33.196   10:50:22	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:33.196   10:50:22	-- common/autotest_common.sh@10 -- # set +x
00:12:33.196  ************************************
00:12:33.196  START TEST nvme_fio
00:12:33.196  ************************************
00:12:33.196   10:50:22	-- common/autotest_common.sh@1114 -- # nvme_fio_test
00:12:33.196   10:50:22	-- nvme/nvme.sh@31 -- # PLUGIN_DIR=/var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme
00:12:33.196   10:50:22	-- nvme/nvme.sh@32 -- # ran_fio=false
00:12:33.196    10:50:22	-- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:12:33.196    10:50:22	-- common/autotest_common.sh@1508 -- # bdfs=()
00:12:33.196    10:50:22	-- common/autotest_common.sh@1508 -- # local bdfs
00:12:33.196    10:50:22	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:12:33.196     10:50:22	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:12:33.196     10:50:22	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:12:33.196    10:50:22	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:12:33.196    10:50:22	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:12:33.196   10:50:22	-- nvme/nvme.sh@33 -- # bdfs=('0000:5e:00.0')
00:12:33.196   10:50:22	-- nvme/nvme.sh@33 -- # local bdfs bdf
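The bdf list above comes straight from gen_nvme.sh, which emits one bdev_nvme_attach_controller stanza per local controller as JSON; jq then plucks each PCIe address out of .config[].params.traddr. Stand-alone, with the paths this run uses:

    bdfs=($(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh \
            | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"   # -> 0000:5e:00.0 on this host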
00:12:33.196   10:50:22	-- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:12:33.196   10:50:22	-- nvme/nvme.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0'
00:12:33.196   10:50:22	-- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:12:33.456  EAL: No free 2048 kB hugepages reported on node 1
00:12:40.029   10:50:28	-- nvme/nvme.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0'
00:12:40.029   10:50:28	-- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:12:40.029  EAL: No free 2048 kB hugepages reported on node 1
00:12:46.602   10:50:35	-- nvme/nvme.sh@41 -- # bs=4096
00:12:46.602   10:50:35	-- nvme/nvme.sh@43 -- # fio_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096
00:12:46.602   10:50:35	-- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096
00:12:46.602   10:50:35	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:12:46.602   10:50:35	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:46.602   10:50:35	-- common/autotest_common.sh@1328 -- # local sanitizers
00:12:46.602   10:50:35	-- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme
00:12:46.602   10:50:35	-- common/autotest_common.sh@1330 -- # shift
00:12:46.602   10:50:35	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:12:46.602   10:50:35	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:12:46.602    10:50:35	-- common/autotest_common.sh@1334 -- # grep libasan
00:12:46.602    10:50:35	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme
00:12:46.602    10:50:35	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:12:46.602   10:50:35	-- common/autotest_common.sh@1334 -- # asan_lib=
00:12:46.602   10:50:35	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:12:46.602   10:50:35	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:12:46.602    10:50:35	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme
00:12:46.602    10:50:35	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:12:46.602    10:50:35	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:12:46.602   10:50:35	-- common/autotest_common.sh@1334 -- # asan_lib=
00:12:46.602   10:50:35	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:12:46.602   10:50:35	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme'
00:12:46.602   10:50:35	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096
00:12:46.602  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:12:46.602  fio-3.35
00:12:46.602  Starting 1 thread
00:12:46.602  EAL: No free 2048 kB hugepages reported on node 1
00:12:56.590  
00:12:56.590  test: (groupid=0, jobs=1): err= 0: pid=2140140: Sun Dec 15 10:50:44 2024
00:12:56.590    read: IOPS=56.7k, BW=221MiB/s (232MB/s)(443MiB/2001msec)
00:12:56.590      slat (nsec): min=4488, max=29790, avg=4786.16, stdev=446.50
00:12:56.590      clat (usec): min=208, max=1687, avg=1113.96, stdev=20.66
00:12:56.590       lat (usec): min=213, max=1692, avg=1118.75, stdev=20.69
00:12:56.590      clat percentiles (usec):
00:12:56.590       |  1.00th=[ 1090],  5.00th=[ 1106], 10.00th=[ 1106], 20.00th=[ 1106],
00:12:56.590       | 30.00th=[ 1106], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1123],
00:12:56.590       | 70.00th=[ 1123], 80.00th=[ 1123], 90.00th=[ 1123], 95.00th=[ 1123],
00:12:56.590       | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1352], 99.95th=[ 1401],
00:12:56.590       | 99.99th=[ 1532]
00:12:56.590     bw (  KiB/s): min=220862, max=229152, per=99.69%, avg=225948.67, stdev=4454.35, samples=3
00:12:56.590     iops        : min=55215, max=57288, avg=56487.00, stdev=1113.87, samples=3
00:12:56.590    write: IOPS=56.5k, BW=221MiB/s (231MB/s)(442MiB/2001msec); 0 zone resets
00:12:56.590      slat (nsec): min=4537, max=128170, avg=4849.43, stdev=679.19
00:12:56.590      clat (usec): min=199, max=1565, avg=1114.31, stdev=19.49
00:12:56.590       lat (usec): min=204, max=1570, avg=1119.16, stdev=19.52
00:12:56.590      clat percentiles (usec):
00:12:56.590       |  1.00th=[ 1090],  5.00th=[ 1106], 10.00th=[ 1106], 20.00th=[ 1106],
00:12:56.590       | 30.00th=[ 1106], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1123],
00:12:56.590       | 70.00th=[ 1123], 80.00th=[ 1123], 90.00th=[ 1123], 95.00th=[ 1123],
00:12:56.590       | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1352], 99.95th=[ 1401],
00:12:56.590       | 99.99th=[ 1532]
00:12:56.590     bw (  KiB/s): min=220638, max=227656, per=99.59%, avg=225106.00, stdev=3882.28, samples=3
00:12:56.590     iops        : min=55159, max=56914, avg=56276.33, stdev=970.86, samples=3
00:12:56.590    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.05%
00:12:56.590    lat (msec)   : 2=99.91%
00:12:56.590    cpu          : usr=99.50%, sys=0.05%, ctx=3, majf=0, minf=5
00:12:56.590    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:12:56.590       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:56.590       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:12:56.590       issued rwts: total=113382,113071,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:56.590       latency   : target=0, window=0, percentile=100.00%, depth=128
00:12:56.590  
00:12:56.590  Run status group 0 (all jobs):
00:12:56.590     READ: bw=221MiB/s (232MB/s), 221MiB/s-221MiB/s (232MB/s-232MB/s), io=443MiB (464MB), run=2001-2001msec
00:12:56.590    WRITE: bw=221MiB/s (231MB/s), 221MiB/s-221MiB/s (231MB/s-231MB/s), io=442MiB (463MB), run=2001-2001msec
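The fio run above is what the ldd/grep/awk loop at autotest_common.sh@1333-1341 was preparing for: fio dlopen()s the SPDK engine, so if the plugin had been built with ASan the sanitizer runtime would have to be preloaded into fio ahead of the plugin (none was found here, so asan_lib stayed empty). Note also that the PCI address in --filename is spelled with dots (0000.5e.00.0) because fio reserves ':' as a filename separator. Condensed, with this run's paths:

    plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /var/jenkins/workspace/nvme-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.5e.00.0' --bs=4096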
00:12:56.590   10:50:44	-- nvme/nvme.sh@44 -- # ran_fio=true
00:12:56.590   10:50:44	-- nvme/nvme.sh@46 -- # true
00:12:56.590  
00:12:56.590  real	0m22.337s
00:12:56.590  user	0m20.692s
00:12:56.590  sys	0m2.327s
00:12:56.590   10:50:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:56.590   10:50:44	-- common/autotest_common.sh@10 -- # set +x
00:12:56.590  ************************************
00:12:56.590  END TEST nvme_fio
00:12:56.590  ************************************
00:12:56.590  
00:12:56.590  real	1m47.030s
00:12:56.590  user	4m3.608s
00:12:56.590  sys	0m18.251s
00:12:56.590   10:50:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:56.590   10:50:44	-- common/autotest_common.sh@10 -- # set +x
00:12:56.590  ************************************
00:12:56.590  END TEST nvme
00:12:56.590  ************************************
00:12:56.590   10:50:44	-- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]]
00:12:56.590   10:50:44	-- spdk/autotest.sh@214 -- # run_test nvme_scc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh
00:12:56.590   10:50:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:56.590   10:50:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:56.590   10:50:44	-- common/autotest_common.sh@10 -- # set +x
00:12:56.590  ************************************
00:12:56.590  START TEST nvme_scc
00:12:56.590  ************************************
00:12:56.590   10:50:44	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_scc.sh
00:12:56.590  * Looking for test storage...
00:12:56.590  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:12:56.590     10:50:44	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:12:56.590      10:50:44	-- common/autotest_common.sh@1690 -- # lcov --version
00:12:56.590      10:50:44	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:12:56.590     10:50:44	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:12:56.590     10:50:44	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:12:56.590     10:50:44	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:12:56.590     10:50:44	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:12:56.590     10:50:44	-- scripts/common.sh@335 -- # IFS=.-:
00:12:56.590     10:50:44	-- scripts/common.sh@335 -- # read -ra ver1
00:12:56.590     10:50:44	-- scripts/common.sh@336 -- # IFS=.-:
00:12:56.590     10:50:44	-- scripts/common.sh@336 -- # read -ra ver2
00:12:56.590     10:50:44	-- scripts/common.sh@337 -- # local 'op=<'
00:12:56.590     10:50:44	-- scripts/common.sh@339 -- # ver1_l=2
00:12:56.590     10:50:44	-- scripts/common.sh@340 -- # ver2_l=1
00:12:56.590     10:50:44	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:12:56.590     10:50:44	-- scripts/common.sh@343 -- # case "$op" in
00:12:56.590     10:50:44	-- scripts/common.sh@344 -- # : 1
00:12:56.590     10:50:44	-- scripts/common.sh@363 -- # (( v = 0 ))
00:12:56.590     10:50:44	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:56.590      10:50:44	-- scripts/common.sh@364 -- # decimal 1
00:12:56.590      10:50:44	-- scripts/common.sh@352 -- # local d=1
00:12:56.590      10:50:44	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:56.590      10:50:44	-- scripts/common.sh@354 -- # echo 1
00:12:56.590     10:50:44	-- scripts/common.sh@364 -- # ver1[v]=1
00:12:56.590      10:50:44	-- scripts/common.sh@365 -- # decimal 2
00:12:56.590      10:50:44	-- scripts/common.sh@352 -- # local d=2
00:12:56.590      10:50:44	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:56.590      10:50:44	-- scripts/common.sh@354 -- # echo 2
00:12:56.590     10:50:44	-- scripts/common.sh@365 -- # ver2[v]=2
00:12:56.590     10:50:44	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:12:56.590     10:50:44	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:12:56.590     10:50:44	-- scripts/common.sh@367 -- # return 0
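The scripts/common.sh trace above is a pure-bash version comparison: split both versions on dots, walk the components in parallel padding the shorter list with zeros, and decide at the first unequal pair; here it concludes that lcov 1.15 sorts before 2, which selects the --rc lcov_* spelling of the coverage options below. The same logic condensed (a sketch assuming plain numeric dot-separated versions):

    version_lt() {   # version_lt A B: succeed iff A < B
        local IFS=.
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo older   # -> older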
00:12:56.590     10:50:44	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:56.590     10:50:44	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:12:56.590  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.590  		--rc genhtml_branch_coverage=1
00:12:56.590  		--rc genhtml_function_coverage=1
00:12:56.590  		--rc genhtml_legend=1
00:12:56.590  		--rc geninfo_all_blocks=1
00:12:56.590  		--rc geninfo_unexecuted_blocks=1
00:12:56.590  		
00:12:56.590  		'
00:12:56.590     10:50:44	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:12:56.590  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.590  		--rc genhtml_branch_coverage=1
00:12:56.590  		--rc genhtml_function_coverage=1
00:12:56.590  		--rc genhtml_legend=1
00:12:56.590  		--rc geninfo_all_blocks=1
00:12:56.590  		--rc geninfo_unexecuted_blocks=1
00:12:56.590  		
00:12:56.590  		'
00:12:56.590     10:50:44	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:12:56.590  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.590  		--rc genhtml_branch_coverage=1
00:12:56.590  		--rc genhtml_function_coverage=1
00:12:56.590  		--rc genhtml_legend=1
00:12:56.590  		--rc geninfo_all_blocks=1
00:12:56.590  		--rc geninfo_unexecuted_blocks=1
00:12:56.590  		
00:12:56.590  		'
00:12:56.590     10:50:44	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:12:56.590  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.590  		--rc genhtml_branch_coverage=1
00:12:56.590  		--rc genhtml_function_coverage=1
00:12:56.590  		--rc genhtml_legend=1
00:12:56.590  		--rc geninfo_all_blocks=1
00:12:56.590  		--rc geninfo_unexecuted_blocks=1
00:12:56.590  		
00:12:56.590  		'
00:12:56.590    10:50:44	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:12:56.590       10:50:44	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:12:56.590      10:50:44	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:12:56.590     10:50:44	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:12:56.590     10:50:44	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:12:56.590      10:50:44	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:56.590      10:50:44	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:56.590      10:50:44	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:56.590       10:50:44	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:56.590       10:50:44	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:56.590       10:50:44	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:56.590       10:50:44	-- paths/export.sh@5 -- # export PATH
00:12:56.590       10:50:44	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:56.590     10:50:44	-- nvme/functions.sh@10 -- # ctrls=()
00:12:56.590     10:50:44	-- nvme/functions.sh@10 -- # declare -A ctrls
00:12:56.590     10:50:44	-- nvme/functions.sh@11 -- # nvmes=()
00:12:56.590     10:50:44	-- nvme/functions.sh@11 -- # declare -A nvmes
00:12:56.590     10:50:44	-- nvme/functions.sh@12 -- # bdfs=()
00:12:56.590     10:50:44	-- nvme/functions.sh@12 -- # declare -A bdfs
00:12:56.590     10:50:44	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:12:56.590     10:50:44	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:12:56.590     10:50:44	-- nvme/functions.sh@14 -- # nvme_name=
00:12:56.590    10:50:44	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:12:56.590    10:50:44	-- nvme/nvme_scc.sh@12 -- # uname
00:12:56.590   10:50:44	-- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:12:56.590   10:50:44	-- nvme/nvme_scc.sh@12 -- # [[ ............................... == QEMU ]]
00:12:56.590   10:50:44	-- nvme/nvme_scc.sh@12 -- # exit 0
00:12:56.590  
00:12:56.590  real	0m0.220s
00:12:56.590  user	0m0.113s
00:12:56.590  sys	0m0.126s
00:12:56.590   10:50:44	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:56.590   10:50:44	-- common/autotest_common.sh@10 -- # set +x
00:12:56.590  ************************************
00:12:56.590  END TEST nvme_scc
00:12:56.590  ************************************
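The 0.2 s wall time above is explained by the guard at nvme_scc.sh@12: the test only applies to QEMU's emulated NVMe controller, and since this box has a physical Intel drive the model comparison against QEMU fails and the script exits 0 straight away. Roughly (the sysfs path is an assumption about where that dot-padded model string is read from):

    model=$(cat /sys/class/nvme/nvme0/model)
    [[ $model == QEMU* ]] || exit 0   # real hardware: nothing to test, report success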
00:12:56.590   10:50:44	-- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]]
00:12:56.590   10:50:44	-- spdk/autotest.sh@219 -- # [[ 1 -eq 1 ]]
00:12:56.590   10:50:44	-- spdk/autotest.sh@220 -- # run_test nvme_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh
00:12:56.590   10:50:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:56.590   10:50:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:56.590   10:50:44	-- common/autotest_common.sh@10 -- # set +x
00:12:56.590  ************************************
00:12:56.590  START TEST nvme_cuse
00:12:56.590  ************************************
00:12:56.590   10:50:44	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse.sh
00:12:56.590  * Looking for test storage...
00:12:56.590  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:12:56.590    10:50:44	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:12:56.590     10:50:44	-- common/autotest_common.sh@1690 -- # lcov --version
00:12:56.590     10:50:44	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:12:56.590    10:50:44	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:12:56.590    10:50:44	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:12:56.590    10:50:44	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:12:56.590    10:50:44	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:12:56.590    10:50:44	-- scripts/common.sh@335 -- # IFS=.-:
00:12:56.590    10:50:44	-- scripts/common.sh@335 -- # read -ra ver1
00:12:56.590    10:50:44	-- scripts/common.sh@336 -- # IFS=.-:
00:12:56.590    10:50:44	-- scripts/common.sh@336 -- # read -ra ver2
00:12:56.590    10:50:44	-- scripts/common.sh@337 -- # local 'op=<'
00:12:56.590    10:50:44	-- scripts/common.sh@339 -- # ver1_l=2
00:12:56.590    10:50:44	-- scripts/common.sh@340 -- # ver2_l=1
00:12:56.590    10:50:44	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:12:56.590    10:50:44	-- scripts/common.sh@343 -- # case "$op" in
00:12:56.590    10:50:44	-- scripts/common.sh@344 -- # : 1
00:12:56.590    10:50:44	-- scripts/common.sh@363 -- # (( v = 0 ))
00:12:56.590    10:50:44	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:56.591     10:50:44	-- scripts/common.sh@364 -- # decimal 1
00:12:56.591     10:50:44	-- scripts/common.sh@352 -- # local d=1
00:12:56.591     10:50:44	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:56.591     10:50:44	-- scripts/common.sh@354 -- # echo 1
00:12:56.591    10:50:44	-- scripts/common.sh@364 -- # ver1[v]=1
00:12:56.591     10:50:44	-- scripts/common.sh@365 -- # decimal 2
00:12:56.591     10:50:44	-- scripts/common.sh@352 -- # local d=2
00:12:56.591     10:50:44	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:56.591     10:50:44	-- scripts/common.sh@354 -- # echo 2
00:12:56.591    10:50:44	-- scripts/common.sh@365 -- # ver2[v]=2
00:12:56.591    10:50:44	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:12:56.591    10:50:44	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:12:56.591    10:50:44	-- scripts/common.sh@367 -- # return 0
00:12:56.591    10:50:44	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:56.591    10:50:44	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:12:56.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.591  		--rc genhtml_branch_coverage=1
00:12:56.591  		--rc genhtml_function_coverage=1
00:12:56.591  		--rc genhtml_legend=1
00:12:56.591  		--rc geninfo_all_blocks=1
00:12:56.591  		--rc geninfo_unexecuted_blocks=1
00:12:56.591  		
00:12:56.591  		'
00:12:56.591    10:50:44	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:12:56.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.591  		--rc genhtml_branch_coverage=1
00:12:56.591  		--rc genhtml_function_coverage=1
00:12:56.591  		--rc genhtml_legend=1
00:12:56.591  		--rc geninfo_all_blocks=1
00:12:56.591  		--rc geninfo_unexecuted_blocks=1
00:12:56.591  		
00:12:56.591  		'
00:12:56.591    10:50:44	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:12:56.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.591  		--rc genhtml_branch_coverage=1
00:12:56.591  		--rc genhtml_function_coverage=1
00:12:56.591  		--rc genhtml_legend=1
00:12:56.591  		--rc geninfo_all_blocks=1
00:12:56.591  		--rc geninfo_unexecuted_blocks=1
00:12:56.591  		
00:12:56.591  		'
00:12:56.591    10:50:44	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:12:56.591  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.591  		--rc genhtml_branch_coverage=1
00:12:56.591  		--rc genhtml_function_coverage=1
00:12:56.591  		--rc genhtml_legend=1
00:12:56.591  		--rc geninfo_all_blocks=1
00:12:56.591  		--rc geninfo_unexecuted_blocks=1
00:12:56.591  		
00:12:56.591  		'
00:12:56.591    10:50:44	-- cuse/nvme_cuse.sh@11 -- # uname
00:12:56.591   10:50:44	-- cuse/nvme_cuse.sh@11 -- # [[ Linux != \L\i\n\u\x ]]
00:12:56.591   10:50:44	-- cuse/nvme_cuse.sh@16 -- # modprobe cuse
00:12:56.591   10:50:44	-- cuse/nvme_cuse.sh@17 -- # run_test nvme_cuse_app /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse
00:12:56.591   10:50:44	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:56.591   10:50:44	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:56.591   10:50:44	-- common/autotest_common.sh@10 -- # set +x
00:12:56.591  ************************************
00:12:56.591  START TEST nvme_cuse_app
00:12:56.591  ************************************
00:12:56.591   10:50:44	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/cuse
00:12:56.591  
00:12:56.591  
00:12:56.591       CUnit - A unit testing framework for C - Version 2.1-3
00:12:56.591       http://cunit.sourceforge.net/
00:12:56.591  
00:12:56.591  
00:12:56.591  Suite: nvme_cuse
00:13:08.808    Test: test_cuse_update ...passed
00:13:08.808  
00:13:08.808  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:08.808                suites      1      1    n/a      0        0
00:13:08.808                 tests      1      1      1      0        0
00:13:08.808               asserts    925    925    925      0      n/a
00:13:08.808  
00:13:08.808  Elapsed time =    0.023 seconds
00:13:08.808  
00:13:08.808  real	0m12.025s
00:13:08.808  user	0m0.011s
00:13:08.808  sys	0m0.021s
00:13:08.808   10:50:56	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:08.808   10:50:56	-- common/autotest_common.sh@10 -- # set +x
00:13:08.808  ************************************
00:13:08.808  END TEST nvme_cuse_app
00:13:08.808  ************************************
00:13:08.808   10:50:56	-- cuse/nvme_cuse.sh@18 -- # run_test nvme_cuse_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh
00:13:08.808   10:50:56	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:08.808   10:50:56	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:08.808   10:50:56	-- common/autotest_common.sh@10 -- # set +x
00:13:08.808  ************************************
00:13:08.808  START TEST nvme_cuse_rpc
00:13:08.808  ************************************
00:13:08.808   10:50:56	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_cuse_rpc.sh
00:13:08.808  * Looking for test storage...
00:13:08.808  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:13:08.808    10:50:57	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:08.808     10:50:57	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:08.808     10:50:57	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:08.808    10:50:57	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:08.808    10:50:57	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:08.808    10:50:57	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:08.808    10:50:57	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:08.808    10:50:57	-- scripts/common.sh@335 -- # IFS=.-:
00:13:08.808    10:50:57	-- scripts/common.sh@335 -- # read -ra ver1
00:13:08.808    10:50:57	-- scripts/common.sh@336 -- # IFS=.-:
00:13:08.808    10:50:57	-- scripts/common.sh@336 -- # read -ra ver2
00:13:08.808    10:50:57	-- scripts/common.sh@337 -- # local 'op=<'
00:13:08.808    10:50:57	-- scripts/common.sh@339 -- # ver1_l=2
00:13:08.808    10:50:57	-- scripts/common.sh@340 -- # ver2_l=1
00:13:08.808    10:50:57	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:08.808    10:50:57	-- scripts/common.sh@343 -- # case "$op" in
00:13:08.808    10:50:57	-- scripts/common.sh@344 -- # : 1
00:13:08.808    10:50:57	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:08.808    10:50:57	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:08.808     10:50:57	-- scripts/common.sh@364 -- # decimal 1
00:13:08.808     10:50:57	-- scripts/common.sh@352 -- # local d=1
00:13:08.808     10:50:57	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:08.808     10:50:57	-- scripts/common.sh@354 -- # echo 1
00:13:08.808    10:50:57	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:08.808     10:50:57	-- scripts/common.sh@365 -- # decimal 2
00:13:08.808     10:50:57	-- scripts/common.sh@352 -- # local d=2
00:13:08.808     10:50:57	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:08.808     10:50:57	-- scripts/common.sh@354 -- # echo 2
00:13:08.808    10:50:57	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:08.808    10:50:57	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:08.808    10:50:57	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:08.808    10:50:57	-- scripts/common.sh@367 -- # return 0
00:13:08.808    10:50:57	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:08.808    10:50:57	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:08.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:08.808  		--rc genhtml_branch_coverage=1
00:13:08.808  		--rc genhtml_function_coverage=1
00:13:08.808  		--rc genhtml_legend=1
00:13:08.808  		--rc geninfo_all_blocks=1
00:13:08.808  		--rc geninfo_unexecuted_blocks=1
00:13:08.808  		
00:13:08.808  		'
00:13:08.808    10:50:57	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:08.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:08.808  		--rc genhtml_branch_coverage=1
00:13:08.808  		--rc genhtml_function_coverage=1
00:13:08.808  		--rc genhtml_legend=1
00:13:08.808  		--rc geninfo_all_blocks=1
00:13:08.808  		--rc geninfo_unexecuted_blocks=1
00:13:08.808  		
00:13:08.808  		'
00:13:08.808    10:50:57	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:08.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:08.808  		--rc genhtml_branch_coverage=1
00:13:08.808  		--rc genhtml_function_coverage=1
00:13:08.808  		--rc genhtml_legend=1
00:13:08.808  		--rc geninfo_all_blocks=1
00:13:08.808  		--rc geninfo_unexecuted_blocks=1
00:13:08.808  		
00:13:08.808  		'
00:13:08.808    10:50:57	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:08.808  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:08.808  		--rc genhtml_branch_coverage=1
00:13:08.808  		--rc genhtml_function_coverage=1
00:13:08.808  		--rc genhtml_legend=1
00:13:08.808  		--rc geninfo_all_blocks=1
00:13:08.808  		--rc geninfo_unexecuted_blocks=1
00:13:08.808  		
00:13:08.808  		'
00:13:08.808   10:50:57	-- cuse/nvme_cuse_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:13:08.808    10:50:57	-- cuse/nvme_cuse_rpc.sh@13 -- # get_first_nvme_bdf
00:13:08.808    10:50:57	-- common/autotest_common.sh@1519 -- # bdfs=()
00:13:08.808    10:50:57	-- common/autotest_common.sh@1519 -- # local bdfs
00:13:08.808    10:50:57	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:13:08.808     10:50:57	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:13:08.808     10:50:57	-- common/autotest_common.sh@1508 -- # bdfs=()
00:13:08.808     10:50:57	-- common/autotest_common.sh@1508 -- # local bdfs
00:13:08.808     10:50:57	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:13:08.808      10:50:57	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:13:08.808      10:50:57	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:13:08.808     10:50:57	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:13:08.808     10:50:57	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:13:08.808    10:50:57	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:13:08.808   10:50:57	-- cuse/nvme_cuse_rpc.sh@13 -- # bdf=0000:5e:00.0
00:13:08.808   10:50:57	-- cuse/nvme_cuse_rpc.sh@14 -- # ctrlr_base=/dev/spdk/nvme
00:13:08.808   10:50:57	-- cuse/nvme_cuse_rpc.sh@17 -- # spdk_tgt_pid=2142855
00:13:08.808   10:50:57	-- cuse/nvme_cuse_rpc.sh@16 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:13:08.808   10:50:57	-- cuse/nvme_cuse_rpc.sh@18 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:13:08.808   10:50:57	-- cuse/nvme_cuse_rpc.sh@20 -- # waitforlisten 2142855
00:13:08.808   10:50:57	-- common/autotest_common.sh@829 -- # '[' -z 2142855 ']'
00:13:08.808   10:50:57	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:08.808   10:50:57	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:08.808   10:50:57	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:08.808  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:08.808   10:50:57	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:08.808   10:50:57	-- common/autotest_common.sh@10 -- # set +x
00:13:08.808  [2024-12-15 10:50:57.352162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:08.808  [2024-12-15 10:50:57.352233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142855 ]
00:13:08.808  EAL: No free 2048 kB hugepages reported on node 1
00:13:08.808  [2024-12-15 10:50:57.459732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:08.808  [2024-12-15 10:50:57.565339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:08.808  [2024-12-15 10:50:57.565522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:08.808  [2024-12-15 10:50:57.565527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:08.809  [2024-12-15 10:50:57.774313] 'OCF_Core' volume operations registered
00:13:08.809  [2024-12-15 10:50:57.777808] 'OCF_Cache' volume operations registered
00:13:08.809  [2024-12-15 10:50:57.781832] 'OCF Composite' volume operations registered
00:13:08.809  [2024-12-15 10:50:57.785318] 'SPDK_block_device' volume operations registered
00:13:09.378   10:50:58	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:09.378   10:50:58	-- common/autotest_common.sh@862 -- # return 0
00:13:09.378   10:50:58	-- cuse/nvme_cuse_rpc.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:13:12.671  Nvme0n1
00:13:12.671   10:51:01	-- cuse/nvme_cuse_rpc.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:13:12.671  [2024-12-15 10:51:01.652518] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:13:12.671  [2024-12-15 10:51:01.652697] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:13:12.672  [2024-12-15 10:51:01.652825] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:13:12.672   10:51:01	-- cuse/nvme_cuse_rpc.sh@25 -- # sleep 5
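The two RPCs above are the whole CUSE bring-up: attach the PCIe controller as bdev Nvme0, register it with the CUSE layer (which creates the fuse sessions for spdk/nvme0 and spdk/nvme0n1 seen in the notices), then pause so the character devices have time to appear before the test pokes them. Replayed as a plain script with this run's paths:

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    $rpc bdev_nvme_cuse_register -n Nvme0
    sleep 5
    [[ -c /dev/spdk/nvme0 && -c /dev/spdk/nvme0n1 ]] \
        || { echo "CUSE devices missing" >&2; exit 1; }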
00:13:17.949   10:51:06	-- cuse/nvme_cuse_rpc.sh@27 -- # '[' '!' -c /dev/spdk/nvme0 ']'
00:13:17.949   10:51:06	-- cuse/nvme_cuse_rpc.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:13:17.949  [
00:13:17.949    {
00:13:17.949      "name": "Nvme0n1",
00:13:17.949      "aliases": [
00:13:17.949        "1ef43d82-60f2-4600-99dc-4f089efdbbcf"
00:13:17.949      ],
00:13:17.949      "product_name": "NVMe disk",
00:13:17.949      "block_size": 512,
00:13:17.949      "num_blocks": 7814037168,
00:13:17.949      "uuid": "1ef43d82-60f2-4600-99dc-4f089efdbbcf",
00:13:17.949      "assigned_rate_limits": {
00:13:17.949        "rw_ios_per_sec": 0,
00:13:17.949        "rw_mbytes_per_sec": 0,
00:13:17.949        "r_mbytes_per_sec": 0,
00:13:17.949        "w_mbytes_per_sec": 0
00:13:17.949      },
00:13:17.949      "claimed": false,
00:13:17.949      "zoned": false,
00:13:17.949      "supported_io_types": {
00:13:17.949        "read": true,
00:13:17.949        "write": true,
00:13:17.949        "unmap": true,
00:13:17.949        "write_zeroes": true,
00:13:17.949        "flush": true,
00:13:17.949        "reset": true,
00:13:17.949        "compare": false,
00:13:17.949        "compare_and_write": false,
00:13:17.949        "abort": true,
00:13:17.949        "nvme_admin": true,
00:13:17.949        "nvme_io": true
00:13:17.949      },
00:13:17.949      "driver_specific": {
00:13:17.949        "nvme": [
00:13:17.949          {
00:13:17.949            "pci_address": "0000:5e:00.0",
00:13:17.949            "trid": {
00:13:17.949              "trtype": "PCIe",
00:13:17.949              "traddr": "0000:5e:00.0"
00:13:17.949            },
00:13:17.949            "cuse_device": "spdk/nvme0n1",
00:13:17.949            "ctrlr_data": {
00:13:17.949              "cntlid": 0,
00:13:17.949              "vendor_id": "0x8086",
00:13:17.949              "model_number": "INTEL SSDPE2KX040T8",
00:13:17.949              "serial_number": "BTLJ83030AK84P0DGN",
00:13:17.949              "firmware_revision": "VDV10184",
00:13:17.949              "oacs": {
00:13:17.949                "security": 0,
00:13:17.949                "format": 1,
00:13:17.949                "firmware": 1,
00:13:17.949                "ns_manage": 1
00:13:17.949              },
00:13:17.949              "multi_ctrlr": false,
00:13:17.949              "ana_reporting": false
00:13:17.949            },
00:13:17.949            "vs": {
00:13:17.949              "nvme_version": "1.2"
00:13:17.949            },
00:13:17.949            "ns_data": {
00:13:17.949              "id": 1,
00:13:17.949              "can_share": false
00:13:17.949            }
00:13:17.949          }
00:13:17.949        ],
00:13:17.949        "mp_policy": "active_passive"
00:13:17.949      }
00:13:17.949    }
00:13:17.949  ]
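Since bdev_get_bdevs returns plain JSON, spot checks are one jq filter away; for instance, the namespace geometry in the dump above works out to 512 B x 7814037168 blocks, i.e. the drive's nominal 4 TB:

    /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[0] | "\(.name): \(.block_size) B x \(.num_blocks) blocks"'
    # -> Nvme0n1: 512 B x 7814037168 blocks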
00:13:17.949   10:51:06	-- cuse/nvme_cuse_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers
00:13:18.208  [
00:13:18.209    {
00:13:18.209      "name": "Nvme0",
00:13:18.209      "ctrlrs": [
00:13:18.209        {
00:13:18.209          "state": "enabled",
00:13:18.209          "cuse_device": "spdk/nvme0",
00:13:18.209          "trid": {
00:13:18.209            "trtype": "PCIe",
00:13:18.209            "traddr": "0000:5e:00.0"
00:13:18.209          },
00:13:18.209          "cntlid": 0,
00:13:18.209          "host": {
00:13:18.209            "nqn": "nqn.2014-08.org.nvmexpress:uuid:3e7d074e-f2bd-46ab-be11-57b1d13b0775",
00:13:18.209            "addr": "",
00:13:18.209            "svcid": ""
00:13:18.209          }
00:13:18.209        }
00:13:18.209      ]
00:13:18.209    }
00:13:18.209  ]
00:13:18.209   10:51:07	-- cuse/nvme_cuse_rpc.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0
00:13:18.800   10:51:07	-- cuse/nvme_cuse_rpc.sh@35 -- # sleep 1
00:13:19.792   10:51:08	-- cuse/nvme_cuse_rpc.sh@36 -- # '[' -c /dev/spdk/nvme0 ']'
00:13:19.792   10:51:08	-- cuse/nvme_cuse_rpc.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_unregister -n Nvme0
00:13:20.051  [2024-12-15 10:51:08.926291] nvme_cuse.c:1343:spdk_nvme_cuse_unregister: *ERROR*: Cannot find associated CUSE device
00:13:20.051  request:
00:13:20.051  {
00:13:20.051    "name": "Nvme0",
00:13:20.051    "method": "bdev_nvme_cuse_unregister",
00:13:20.051    "req_id": 1
00:13:20.051  }
00:13:20.051  Got JSON-RPC error response
00:13:20.051  response:
00:13:20.051  {
00:13:20.051    "code": -19,
00:13:20.051    "message": "No such device"
00:13:20.051  }
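That error is the point of this step: the CUSE devices were unregistered at nvme_cuse_rpc.sh@34 and confirmed gone at @36, so a second unregister must now fail with -19 (ENODEV). The negative check in isolation:

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    if $rpc bdev_nvme_cuse_unregister -n Nvme0 2>/dev/null; then
        echo "unregister of an already-removed CUSE device unexpectedly succeeded" >&2
        exit 1
    fi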
00:13:20.051   10:51:08	-- cuse/nvme_cuse_rpc.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:13:20.310  [2024-12-15 10:51:09.125029] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:13:20.310  [2024-12-15 10:51:09.125160] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:13:20.310  [2024-12-15 10:51:09.125230] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:13:20.310   10:51:09	-- cuse/nvme_cuse_rpc.sh@44 -- # sleep 1
00:13:21.248   10:51:10	-- cuse/nvme_cuse_rpc.sh@46 -- # '[' '!' -c /dev/spdk/nvme0 ']'
00:13:21.248   10:51:10	-- cuse/nvme_cuse_rpc.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:13:21.508  [2024-12-15 10:51:10.391602] bdev_nvme_cuse_rpc.c:  57:rpc_nvme_cuse_register: *ERROR*: Failed to register CUSE devices: File exists
00:13:21.508  request:
00:13:21.508  {
00:13:21.508    "name": "Nvme0",
00:13:21.508    "method": "bdev_nvme_cuse_register",
00:13:21.508    "req_id": 1
00:13:21.508  }
00:13:21.508  Got JSON-RPC error response
00:13:21.508  response:
00:13:21.508  {
00:13:21.508    "code": -17,
00:13:21.508    "message": "File exists"
00:13:21.508  }
00:13:21.508   10:51:10	-- cuse/nvme_cuse_rpc.sh@52 -- # sleep 1
00:13:22.445   10:51:11	-- cuse/nvme_cuse_rpc.sh@54 -- # '[' -c /dev/spdk/nvme1 ']'
00:13:22.445   10:51:11	-- cuse/nvme_cuse_rpc.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:13:27.750   10:51:16	-- cuse/nvme_cuse_rpc.sh@60 -- # trap - SIGINT SIGTERM EXIT
00:13:27.750   10:51:16	-- cuse/nvme_cuse_rpc.sh@61 -- # killprocess 2142855
00:13:27.750   10:51:16	-- common/autotest_common.sh@936 -- # '[' -z 2142855 ']'
00:13:27.750   10:51:16	-- common/autotest_common.sh@940 -- # kill -0 2142855
00:13:27.750    10:51:16	-- common/autotest_common.sh@941 -- # uname
00:13:27.750   10:51:16	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:27.750    10:51:16	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2142855
00:13:27.750   10:51:16	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:27.750   10:51:16	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:27.750   10:51:16	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2142855'
00:13:27.750  killing process with pid 2142855
00:13:27.750   10:51:16	-- common/autotest_common.sh@955 -- # kill 2142855
00:13:27.750   10:51:16	-- common/autotest_common.sh@960 -- # wait 2142855
00:13:28.008  
00:13:28.008  real	0m19.936s
00:13:28.008  user	0m38.753s
00:13:28.008  sys	0m1.161s
00:13:28.008   10:51:16	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:28.008   10:51:16	-- common/autotest_common.sh@10 -- # set +x
00:13:28.008  ************************************
00:13:28.008  END TEST nvme_cuse_rpc
00:13:28.008  ************************************
00:13:28.008   10:51:16	-- cuse/nvme_cuse.sh@19 -- # run_test nvme_cli_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh
00:13:28.008   10:51:16	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:28.009   10:51:16	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:28.009   10:51:16	-- common/autotest_common.sh@10 -- # set +x
00:13:28.009  ************************************
00:13:28.009  START TEST nvme_cli_cuse
00:13:28.009  ************************************
00:13:28.009   10:51:16	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_cuse.sh
00:13:28.268  * Looking for test storage...
00:13:28.268  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:13:28.268     10:51:17	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:28.268      10:51:17	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:28.268      10:51:17	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:28.268     10:51:17	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:28.268     10:51:17	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:28.268     10:51:17	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:28.268     10:51:17	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:28.268     10:51:17	-- scripts/common.sh@335 -- # IFS=.-:
00:13:28.268     10:51:17	-- scripts/common.sh@335 -- # read -ra ver1
00:13:28.268     10:51:17	-- scripts/common.sh@336 -- # IFS=.-:
00:13:28.268     10:51:17	-- scripts/common.sh@336 -- # read -ra ver2
00:13:28.268     10:51:17	-- scripts/common.sh@337 -- # local 'op=<'
00:13:28.268     10:51:17	-- scripts/common.sh@339 -- # ver1_l=2
00:13:28.268     10:51:17	-- scripts/common.sh@340 -- # ver2_l=1
00:13:28.268     10:51:17	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:28.268     10:51:17	-- scripts/common.sh@343 -- # case "$op" in
00:13:28.268     10:51:17	-- scripts/common.sh@344 -- # : 1
00:13:28.268     10:51:17	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:28.268     10:51:17	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:28.268      10:51:17	-- scripts/common.sh@364 -- # decimal 1
00:13:28.268      10:51:17	-- scripts/common.sh@352 -- # local d=1
00:13:28.268      10:51:17	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:28.268      10:51:17	-- scripts/common.sh@354 -- # echo 1
00:13:28.268     10:51:17	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:28.268      10:51:17	-- scripts/common.sh@365 -- # decimal 2
00:13:28.268      10:51:17	-- scripts/common.sh@352 -- # local d=2
00:13:28.268      10:51:17	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:28.268      10:51:17	-- scripts/common.sh@354 -- # echo 2
00:13:28.268     10:51:17	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:28.268     10:51:17	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:28.268     10:51:17	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:28.268     10:51:17	-- scripts/common.sh@367 -- # return 0
00:13:28.268     10:51:17	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:28.268     10:51:17	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:28.268  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.268  		--rc genhtml_branch_coverage=1
00:13:28.268  		--rc genhtml_function_coverage=1
00:13:28.268  		--rc genhtml_legend=1
00:13:28.268  		--rc geninfo_all_blocks=1
00:13:28.268  		--rc geninfo_unexecuted_blocks=1
00:13:28.268  		
00:13:28.268  		'
00:13:28.268     10:51:17	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:28.268  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.268  		--rc genhtml_branch_coverage=1
00:13:28.268  		--rc genhtml_function_coverage=1
00:13:28.268  		--rc genhtml_legend=1
00:13:28.268  		--rc geninfo_all_blocks=1
00:13:28.268  		--rc geninfo_unexecuted_blocks=1
00:13:28.268  		
00:13:28.268  		'
00:13:28.268     10:51:17	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:28.268  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.268  		--rc genhtml_branch_coverage=1
00:13:28.268  		--rc genhtml_function_coverage=1
00:13:28.268  		--rc genhtml_legend=1
00:13:28.269  		--rc geninfo_all_blocks=1
00:13:28.269  		--rc geninfo_unexecuted_blocks=1
00:13:28.269  		
00:13:28.269  		'
00:13:28.269     10:51:17	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:28.269  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:28.269  		--rc genhtml_branch_coverage=1
00:13:28.269  		--rc genhtml_function_coverage=1
00:13:28.269  		--rc genhtml_legend=1
00:13:28.269  		--rc geninfo_all_blocks=1
00:13:28.269  		--rc geninfo_unexecuted_blocks=1
00:13:28.269  		
00:13:28.269  		'
00:13:28.269    10:51:17	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:13:28.269       10:51:17	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:13:28.269      10:51:17	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:13:28.269     10:51:17	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:13:28.269     10:51:17	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:13:28.269      10:51:17	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:28.269      10:51:17	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:28.269      10:51:17	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:28.269       10:51:17	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.269       10:51:17	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.269       10:51:17	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.269       10:51:17	-- paths/export.sh@5 -- # export PATH
00:13:28.269       10:51:17	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:28.269     10:51:17	-- nvme/functions.sh@10 -- # ctrls=()
00:13:28.269     10:51:17	-- nvme/functions.sh@10 -- # declare -A ctrls
00:13:28.269     10:51:17	-- nvme/functions.sh@11 -- # nvmes=()
00:13:28.269     10:51:17	-- nvme/functions.sh@11 -- # declare -A nvmes
00:13:28.269     10:51:17	-- nvme/functions.sh@12 -- # bdfs=()
00:13:28.269     10:51:17	-- nvme/functions.sh@12 -- # declare -A bdfs
00:13:28.269     10:51:17	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:13:28.269     10:51:17	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:13:28.269     10:51:17	-- nvme/functions.sh@14 -- # nvme_name=
00:13:28.269    10:51:17	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:13:28.269   10:51:17	-- cuse/spdk_nvme_cli_cuse.sh@10 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
00:13:28.269   10:51:17	-- cuse/spdk_nvme_cli_cuse.sh@11 -- # mkdir /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
00:13:28.269   10:51:17	-- cuse/spdk_nvme_cli_cuse.sh@13 -- # KERNEL_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out
00:13:28.269   10:51:17	-- cuse/spdk_nvme_cli_cuse.sh@14 -- # CUSE_OUT=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out
00:13:28.269   10:51:17	-- cuse/spdk_nvme_cli_cuse.sh@16 -- # NVME_CMD=/usr/local/src/nvme-cli/nvme
00:13:28.269   10:51:17	-- cuse/spdk_nvme_cli_cuse.sh@17 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
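KERNEL_OUT and CUSE_OUT above telegraph the plan for this test: run the same nvme-cli commands once against the kernel's /dev/nvme0 and once against SPDK's /dev/spdk/nvme0, capture both, and diff the match files. In miniature (an illustration of the idea with hypothetical capture paths, not the script's exact steps):

    nvme=/usr/local/src/nvme-cli/nvme
    "$nvme" id-ctrl /dev/nvme0      > /tmp/kernel.out
    "$nvme" id-ctrl /dev/spdk/nvme0 > /tmp/cuse.out
    diff /tmp/kernel.out /tmp/cuse.out && echo "kernel and CUSE agree"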
00:13:28.269   10:51:17	-- cuse/spdk_nvme_cli_cuse.sh@19 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:13:31.560  Waiting for block devices as requested
00:13:31.560  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:13:31.560  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:13:31.560  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:13:31.560  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:13:31.560  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:13:31.560  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:13:31.560  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:13:31.560  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:13:31.819  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:13:31.819  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:13:31.820  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:13:32.079  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:13:32.079  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:13:32.079  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:13:32.339  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:13:32.339  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:13:32.339  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
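setup.sh reset hands each allowed device back to its in-kernel driver, the SSD at 0000:5e:00.0 to nvme and the sixteen I/OAT DMA channels to ioatdma, so that kernel nvme-cli has a /dev/nvme0 to compare against the CUSE device later. The resulting binding is easy to confirm from sysfs:

    # the bound driver is a symlink sysfs keeps per PCI device
    basename "$(readlink /sys/bus/pci/devices/0000:5e:00.0/driver)"   # -> nvme after the reset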
00:13:32.339   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@20 -- # scan_nvme_ctrls
00:13:32.339   10:51:21	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:13:32.339   10:51:21	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:32.339   10:51:21	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@49 -- # pci=0000:5e:00.0
00:13:32.339   10:51:21	-- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0
00:13:32.339   10:51:21	-- scripts/common.sh@15 -- # local i
00:13:32.339   10:51:21	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:13:32.339   10:51:21	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:13:32.339   10:51:21	-- scripts/common.sh@24 -- # return 0
00:13:32.339   10:51:21	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:13:32.339   10:51:21	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:13:32.339   10:51:21	-- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@18 -- # shift
00:13:32.339   10:51:21	-- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339    10:51:21	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[vid]=0x8086
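Each field of nvme id-ctrl gets one iteration of the read loop being traced here: split the "key : value" line on the colon, squeeze the whitespace out of the key, and eval the pair into the nvme0 associative array. The hundreds of trace lines that follow collapse to a few lines of shell (a sketch of the shape, not functions.sh verbatim):

    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue      # skip blank and separator lines
        reg=${reg//[[:space:]]/}                  # keys like vid, sn, mn, fr
        ctrl[$reg]=$(sed 's/^ *//' <<< "$val")    # keep the value, drop the lead space
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${ctrl[sn]}"   # -> BTLJ83030AK84P0DGN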
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  BTLJ83030AK84P0DGN   ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN  "'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN  '
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  INTEL SSDPE2KX040T8                      ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8                     "'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8                     '
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  VDV10184 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[rab]=0
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  5cd2e4 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  5 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[mdts]=5
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x10200 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ver]=0x10200
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x989680 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0xe4e1c0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x200 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[oaes]=0x200
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ctratt]=0
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[cntrltype]=0
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.339   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:13:32.339   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:13:32.339    10:51:21	-- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:13:32.339   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[mec]=1
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[oacs]=0xe
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[acl]=3
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x18 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[frmw]=0x18
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[lpa]=0xe
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  63 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[elpe]=63
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[npss]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  353 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[cctemp]=353
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[kas]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.340   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.340   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:13:32.340    10:51:21	-- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.340   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.341   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.341   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:13:32.341    10:51:21	-- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:13:32.341   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.341   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.341   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.341   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:13:32.341    10:51:21	-- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:13:32.341   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.341   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.341   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.341   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:13:32.341    10:51:21	-- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[pels]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[nn]=128
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x6 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[oncs]=0x6
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.601   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.601   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.601   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:13:32.601    10:51:21	-- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[fna]=0x4
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[vwc]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[awun]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ocfs]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[sgls]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n   ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[subnqn]=
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0'
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n - ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
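The xtrace run above (functions.sh@16-23) is one pass of a generic parser: nvme id-ctrl output is read line by line with IFS=:, header lines with an empty value are skipped, and each key/value pair is eval'd into a global associative array named after the device. A minimal sketch of that pattern, not the exact upstream code (the trimming details are approximated):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # declares global array, e.g. nvme0
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip banner/blank lines
            reg=${reg//[[:space:]]/}             # bare key: vid, ssvid, sn, ...
            eval "${ref}[${reg}]=\"\${val# }\""  # e.g. nvme0[vid]='0x8086'
        done < <("$@")                           # e.g. nvme id-ctrl /dev/nvme0
    }

Called as nvme_get nvme0 id-ctrl /dev/nvme0 here, which is why @57 below can reuse the same helper for id-ns output.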
00:13:32.602   10:51:21	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:13:32.602   10:51:21	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:32.602   10:51:21	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:13:32.602   10:51:21	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:13:32.602   10:51:21	-- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@18 -- # shift
00:13:32.602   10:51:21	-- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602    10:51:21	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[flbas]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.602   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.602   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"'
00:13:32.602    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[mc]=0
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.602   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[dpc]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[mcl]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[msrc]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  010000009f6e00000000000000000000 ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="010000009f6e00000000000000000000"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[nguid]=010000009f6e00000000000000000000
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  0000000000009f6e ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000009f6e"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000009f6e
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0x2 (in use) ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0x2 (in use)"'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0x2 (in use)'
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:13:32.603   10:51:21	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0   lbads:12 rp:0 "'
00:13:32.603    10:51:21	-- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0   lbads:12 rp:0 '
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # IFS=:
00:13:32.603   10:51:21	-- nvme/functions.sh@21 -- # read -r reg val
00:13:32.603   10:51:21	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:13:32.603   10:51:21	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:13:32.603   10:51:21	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:13:32.603   10:51:21	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0
00:13:32.603   10:51:21	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:13:32.603   10:51:21	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
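Once a controller and its namespaces are parsed, @58-@63 file them into global lookup tables. A hedged reconstruction of that bookkeeping (array names follow the trace; the wrapper function is illustrative and BDF resolution from sysfs is elided):

    declare -A ctrls nvmes bdfs            # global registries assigned at @60-62
    declare -a ordered_ctrls
    scan_ctrl() {                          # illustrative wrapper, not upstream code
        local ctrl=$1 ctrl_dev=${1##*/} ns ns_dev bdf
        nvme_get "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"
        declare -gA "${ctrl_dev}_ns=()"
        local -n _ctrl_ns=${ctrl_dev}_ns
        for ns in "$ctrl/${ctrl##*/}n"*; do            # /sys/class/nvme/nvme0/nvme0n1
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}
            nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns_dev##*n}]=$ns_dev            # _ctrl_ns[1]=nvme0n1
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                # name of the per-ctrl ns map
        bdfs[$ctrl_dev]=$bdf                           # PCI address; 0000:5e:00.0 here
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # index 0 for nvme0
    }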
00:13:32.603    10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@22 -- # get_nvme_with_ns_management
00:13:32.603    10:51:21	-- nvme/functions.sh@153 -- # local _ctrls
00:13:32.603    10:51:21	-- nvme/functions.sh@155 -- # _ctrls=($(get_nvmes_with_ns_management))
00:13:32.603     10:51:21	-- nvme/functions.sh@155 -- # get_nvmes_with_ns_management
00:13:32.603     10:51:21	-- nvme/functions.sh@144 -- # (( 1 == 0 ))
00:13:32.603     10:51:21	-- nvme/functions.sh@146 -- # local ctrl
00:13:32.603     10:51:21	-- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}"
00:13:32.603     10:51:21	-- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt
00:13:32.603     10:51:21	-- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt
00:13:32.603     10:51:21	-- nvme/functions.sh@122 -- # local -A bits
00:13:32.603     10:51:21	-- nvme/functions.sh@125 -- # bits["ss/sr"]=1
00:13:32.603     10:51:21	-- nvme/functions.sh@126 -- # bits["fnvme"]=2
00:13:32.603     10:51:21	-- nvme/functions.sh@127 -- # bits["fc/fi"]=4
00:13:32.603     10:51:21	-- nvme/functions.sh@128 -- # bits["nsmgt"]=8
00:13:32.603     10:51:21	-- nvme/functions.sh@129 -- # bits["self-test"]=16
00:13:32.603     10:51:21	-- nvme/functions.sh@130 -- # bits["directives"]=32
00:13:32.603     10:51:21	-- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64
00:13:32.603     10:51:21	-- nvme/functions.sh@132 -- # bits["virtmgt"]=128
00:13:32.603     10:51:21	-- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256
00:13:32.603     10:51:21	-- nvme/functions.sh@134 -- # bits["getlba"]=512
00:13:32.603     10:51:21	-- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024
00:13:32.603     10:51:21	-- nvme/functions.sh@137 -- # bit=nsmgt
00:13:32.603     10:51:21	-- nvme/functions.sh@138 -- # [[ -n 8 ]]
00:13:32.603      10:51:21	-- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs
00:13:32.603      10:51:21	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs
00:13:32.604      10:51:21	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:13:32.604      10:51:21	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:13:32.604      10:51:21	-- nvme/functions.sh@75 -- # [[ -n 0xe ]]
00:13:32.604      10:51:21	-- nvme/functions.sh@76 -- # echo 0xe
00:13:32.604     10:51:21	-- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] ))
00:13:32.604     10:51:21	-- nvme/functions.sh@148 -- # echo nvme0
00:13:32.604    10:51:21	-- nvme/functions.sh@156 -- # (( 1 > 0 ))
00:13:32.604    10:51:21	-- nvme/functions.sh@157 -- # echo nvme0
00:13:32.604    10:51:21	-- nvme/functions.sh@158 -- # return 0
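get_nvmes_with_ns_management reduces to a bitmask test on the OACS value captured earlier; with oacs=0xe the nsmgt bit (8) is set, so nvme0 is emitted. Restated compactly, with the bit values copied from @125-135 above:

    declare -A bits=(
        ["ss/sr"]=1 ["fnvme"]=2 ["fc/fi"]=4 ["nsmgt"]=8
        ["self-test"]=16 ["directives"]=32 ["nvme-mi-s/r"]=64
        ["virtmgt"]=128 ["doorbellbuf"]=256 ["getlba"]=512
        ["commfeatlock"]=1024
    )
    oacs=0xe                                  # nvme0[oacs] from the parse above
    (( oacs & bits[nsmgt] )) && echo nvme0    # 0xe & 8 != 0 -> ns mgmt supported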
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@22 -- # nvme_name=nvme0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@27 -- # sel_cmd=()
00:13:32.604    10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@29 -- # get_oncs nvme0
00:13:32.604    10:51:21	-- nvme/functions.sh@169 -- # local ctrl=nvme0
00:13:32.604    10:51:21	-- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs
00:13:32.604    10:51:21	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:13:32.604    10:51:21	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:13:32.604    10:51:21	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:13:32.604    10:51:21	-- nvme/functions.sh@75 -- # [[ -n 0x6 ]]
00:13:32.604    10:51:21	-- nvme/functions.sh@76 -- # echo 0x6
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@29 -- # (( 0x6 & 1 << 4 ))
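@29 probes ONCS the same way; bit 4 (Save/Select field support in Set/Get Features, per the NVMe base spec) is clear in 0x6, so sel_cmd stays empty. A sketch, with the flag value being an assumption:

    oncs=0x6
    sel_cmd=()
    if (( oncs & 1 << 4 )); then     # not taken here: 0x6 has bit 4 clear
        sel_cmd=(-s 1)               # hypothetical save/select flag
    fi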
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@33 -- # ctrlr=/dev/nvme0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@34 -- # ns=/dev/nvme0n1
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@35 -- # bdf=0000:5e:00.0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@37 -- # waitforblk nvme0n1
00:13:32.604   10:51:21	-- common/autotest_common.sh@1224 -- # local i=0
00:13:32.604   10:51:21	-- common/autotest_common.sh@1225 -- # lsblk -l -o NAME
00:13:32.604   10:51:21	-- common/autotest_common.sh@1225 -- # grep -q -w nvme0n1
00:13:32.604   10:51:21	-- common/autotest_common.sh@1231 -- # lsblk -l -o NAME
00:13:32.604   10:51:21	-- common/autotest_common.sh@1231 -- # grep -q -w nvme0n1
00:13:32.604   10:51:21	-- common/autotest_common.sh@1235 -- # return 0
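waitforblk (@1224-1235) is a check-then-poll helper: lsblk output is grep'd for the block device until it appears. A minimal sketch, assuming the usual retry cap and one-second interval (the trace above succeeded without retries):

    waitforblk() {
        local i=0
        while ! lsblk -l -o NAME | grep -q -w "$1"; do
            (( ++i > 100 )) && return 1    # assumed bound
            sleep 1
        done
        return 0
    }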
00:13:32.604    10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:13:32.604    10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # grep oacs
00:13:32.604    10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # cut -d: -f2
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@39 -- # oacs=' 0xe'
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@40 -- # oacs_firmware=4
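@39-40 turn the grep'd OACS field into the two values used below; the mask is inferred from oacs_firmware=4 matching OACS bit 2 (0x4, firmware download/commit):

    oacs=$(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    oacs_firmware=$(( oacs & 0x4 ))   # ' 0xe' & 0x4 -> 4, so fw-log runs at @49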
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/nvme0n1
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@43 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@44 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/nvme0n1
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@46 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@47 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/nvme0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@48 -- # '[' 4 -ne 0 ']'
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@49 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/nvme0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@51 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/nvme0
00:13:32.604  Smart Log for NVME device:nvme0 namespace-id:ffffffff
00:13:32.604  critical_warning			: 0
00:13:32.604  temperature				: 37 °C (310 K)
00:13:32.604  available_spare				: 99%
00:13:32.604  available_spare_threshold		: 10%
00:13:32.604  percentage_used				: 32%
00:13:32.604  endurance group critical warning summary: 0
00:13:32.604  Data Units Read				: 628,379,968 (321.73 TB)
00:13:32.604  Data Units Written			: 790,799,418 (404.89 TB)
00:13:32.604  host_read_commands			: 36,986,167,448
00:13:32.604  host_write_commands			: 42,949,937,724
00:13:32.604  controller_busy_time			: 3,917
00:13:32.604  power_cycles				: 31
00:13:32.604  power_on_hours				: 20,842
00:13:32.604  unsafe_shutdowns			: 46
00:13:32.604  media_errors				: 0
00:13:32.604  num_err_log_entries			: 38,669
00:13:32.604  Warning Temperature Time		: 2198
00:13:32.604  Critical Composite Temperature Time	: 0
00:13:32.604  Thermal Management T1 Trans Count	: 0
00:13:32.604  Thermal Management T2 Trans Count	: 0
00:13:32.604  Thermal Management T1 Total Time	: 0
00:13:32.604  Thermal Management T2 Total Time	: 0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@52 -- # /usr/local/src/nvme-cli/nvme error-log /dev/nvme0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@53 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/nvme0 -f 1 -l 100
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@54 -- # /usr/local/src/nvme-cli/nvme get-log /dev/nvme0 -i 1 -l 100
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@59 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/nvme0 -n 1 -f 2 -v 0
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@59 -- # true
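The bare true at @59 is the tail of an error-tolerant invocation: set-feature FID 2 (power management) with value 0 may be rejected by the drive, and the guard keeps set -e from aborting the test. The pattern, as assumed from the trace:

    /usr/local/src/nvme-cli/nvme set-feature /dev/nvme0 -n 1 -f 2 -v 0 || true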
00:13:32.604   10:51:21	-- cuse/spdk_nvme_cli_cuse.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:13:35.897  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:13:35.897  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:13:36.157  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:13:36.157  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:13:36.157  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:13:39.448  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
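setup.sh's "driver -> vfio-pci" lines record rebinds; mechanically this is the standard sysfs two-step (a sketch of the mechanism, not the script's exact code):

    bdf=0000:5e:00.0
    echo "$bdf"   > "/sys/bus/pci/devices/$bdf/driver/unbind"     # leave nvme/ioatdma
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf"   > /sys/bus/pci/drivers_probe                    # bind vfio-pci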
00:13:39.448   10:51:28	-- cuse/spdk_nvme_cli_cuse.sh@64 -- # spdk_tgt_pid=2148869
00:13:39.448   10:51:28	-- cuse/spdk_nvme_cli_cuse.sh@65 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:13:39.448   10:51:28	-- cuse/spdk_nvme_cli_cuse.sh@67 -- # waitforlisten 2148869
00:13:39.448   10:51:28	-- cuse/spdk_nvme_cli_cuse.sh@63 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:13:39.448   10:51:28	-- common/autotest_common.sh@829 -- # '[' -z 2148869 ']'
00:13:39.448   10:51:28	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:39.448   10:51:28	-- common/autotest_common.sh@834 -- # local max_retries=100
00:13:39.448   10:51:28	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:39.448  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:39.448   10:51:28	-- common/autotest_common.sh@838 -- # xtrace_disable
00:13:39.448   10:51:28	-- common/autotest_common.sh@10 -- # set +x
00:13:39.448  [2024-12-15 10:51:28.214319] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:39.448  [2024-12-15 10:51:28.214388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148869 ]
00:13:39.448  EAL: No free 2048 kB hugepages reported on node 1
00:13:39.448  [2024-12-15 10:51:28.309196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:39.448  [2024-12-15 10:51:28.413694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:39.448  [2024-12-15 10:51:28.413884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:39.448  [2024-12-15 10:51:28.413890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:39.707  [2024-12-15 10:51:28.611110] 'OCF_Core' volume operations registered
00:13:39.707  [2024-12-15 10:51:28.614600] 'OCF_Cache' volume operations registered
00:13:39.707  [2024-12-15 10:51:28.618543] 'OCF Composite' volume operations registered
00:13:39.707  [2024-12-15 10:51:28.622044] 'SPDK_block_device' volume operations registered
00:13:40.276   10:51:29	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:13:40.276   10:51:29	-- common/autotest_common.sh@862 -- # return 0
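@63-67 above interleave in the log but amount to: start spdk_tgt in the background, arm a cleanup trap, and block in waitforlisten until the RPC socket answers. Reassembled with the paths and options as logged:

    /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"    # polls /var/tmp/spdk.sock via rpc.py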
00:13:40.276   10:51:29	-- cuse/spdk_nvme_cli_cuse.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:13:43.567  Nvme0n1
00:13:43.567   10:51:32	-- cuse/spdk_nvme_cli_cuse.sh@70 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:13:43.827  [2024-12-15 10:51:32.760764] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:13:43.827  [2024-12-15 10:51:32.760930] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:13:43.827  [2024-12-15 10:51:32.761056] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:13:43.827   10:51:32	-- cuse/spdk_nvme_cli_cuse.sh@72 -- # ctrlr=/dev/spdk/nvme0
00:13:43.827   10:51:32	-- cuse/spdk_nvme_cli_cuse.sh@73 -- # ns=/dev/spdk/nvme0n1
00:13:43.827   10:51:32	-- cuse/spdk_nvme_cli_cuse.sh@74 -- # waitforfile /dev/spdk/nvme0n1
00:13:43.827   10:51:32	-- common/autotest_common.sh@1254 -- # local i=0
00:13:43.827   10:51:32	-- common/autotest_common.sh@1255 -- # '[' '!' -e /dev/spdk/nvme0n1 ']'
00:13:43.827   10:51:32	-- common/autotest_common.sh@1261 -- # '[' '!' -e /dev/spdk/nvme0n1 ']'
00:13:43.827   10:51:32	-- common/autotest_common.sh@1265 -- # return 0
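waitforfile (@1254-1265) is the device-node twin of waitforblk, gating on the CUSE nodes the register call just created. Sketch with an assumed retry cap and interval:

    waitforfile() {
        local i=0
        while [ ! -e "$1" ]; do
            (( ++i > 200 )) && return 1   # assumed bound
            sleep 0.1
        done
        return 0
    }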
00:13:43.827   10:51:32	-- cuse/spdk_nvme_cli_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:13:44.086  [
00:13:44.086    {
00:13:44.086      "name": "Nvme0n1",
00:13:44.086      "aliases": [
00:13:44.086        "a6ba86c8-5e24-4f08-99df-93357a60448f"
00:13:44.086      ],
00:13:44.086      "product_name": "NVMe disk",
00:13:44.086      "block_size": 512,
00:13:44.086      "num_blocks": 7814037168,
00:13:44.086      "uuid": "a6ba86c8-5e24-4f08-99df-93357a60448f",
00:13:44.086      "assigned_rate_limits": {
00:13:44.086        "rw_ios_per_sec": 0,
00:13:44.086        "rw_mbytes_per_sec": 0,
00:13:44.086        "r_mbytes_per_sec": 0,
00:13:44.086        "w_mbytes_per_sec": 0
00:13:44.086      },
00:13:44.086      "claimed": false,
00:13:44.086      "zoned": false,
00:13:44.086      "supported_io_types": {
00:13:44.086        "read": true,
00:13:44.086        "write": true,
00:13:44.086        "unmap": true,
00:13:44.086        "write_zeroes": true,
00:13:44.086        "flush": true,
00:13:44.086        "reset": true,
00:13:44.086        "compare": false,
00:13:44.086        "compare_and_write": false,
00:13:44.086        "abort": true,
00:13:44.086        "nvme_admin": true,
00:13:44.086        "nvme_io": true
00:13:44.086      },
00:13:44.086      "driver_specific": {
00:13:44.086        "nvme": [
00:13:44.086          {
00:13:44.086            "pci_address": "0000:5e:00.0",
00:13:44.086            "trid": {
00:13:44.086              "trtype": "PCIe",
00:13:44.086              "traddr": "0000:5e:00.0"
00:13:44.086            },
00:13:44.086            "cuse_device": "spdk/nvme0n1",
00:13:44.086            "ctrlr_data": {
00:13:44.086              "cntlid": 0,
00:13:44.086              "vendor_id": "0x8086",
00:13:44.086              "model_number": "INTEL SSDPE2KX040T8",
00:13:44.086              "serial_number": "BTLJ83030AK84P0DGN",
00:13:44.086              "firmware_revision": "VDV10184",
00:13:44.086              "oacs": {
00:13:44.086                "security": 0,
00:13:44.086                "format": 1,
00:13:44.086                "firmware": 1,
00:13:44.086                "ns_manage": 1
00:13:44.086              },
00:13:44.086              "multi_ctrlr": false,
00:13:44.086              "ana_reporting": false
00:13:44.086            },
00:13:44.086            "vs": {
00:13:44.086              "nvme_version": "1.2"
00:13:44.086            },
00:13:44.086            "ns_data": {
00:13:44.086              "id": 1,
00:13:44.086              "can_share": false
00:13:44.086            }
00:13:44.086          }
00:13:44.086        ],
00:13:44.086        "mp_policy": "active_passive"
00:13:44.086      }
00:13:44.086    }
00:13:44.086  ]
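The bdev JSON above is what later assertions key off. An illustrative query against it (jq assumed available on the host; it is not part of the test itself):

    /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[0].driver_specific.nvme[0].ctrlr_data.serial_number'
    # -> BTLJ83030AK84P0DGN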
00:13:44.086   10:51:32	-- cuse/spdk_nvme_cli_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers
00:13:44.346  [
00:13:44.346    {
00:13:44.346      "name": "Nvme0",
00:13:44.346      "ctrlrs": [
00:13:44.346        {
00:13:44.346          "state": "enabled",
00:13:44.346          "cuse_device": "spdk/nvme0",
00:13:44.346          "trid": {
00:13:44.346            "trtype": "PCIe",
00:13:44.346            "traddr": "0000:5e:00.0"
00:13:44.346          },
00:13:44.346          "cntlid": 0,
00:13:44.346          "host": {
00:13:44.346            "nqn": "nqn.2014-08.org.nvmexpress:uuid:c414aded-9b29-400d-86b7-69c447a4b012",
00:13:44.346            "addr": "",
00:13:44.346            "svcid": ""
00:13:44.346          }
00:13:44.346        }
00:13:44.346      ]
00:13:44.346    }
00:13:44.346  ]
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@79 -- # /usr/local/src/nvme-cli/nvme get-ns-id /dev/spdk/nvme0n1
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@80 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/spdk/nvme0n1
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@81 -- # /usr/local/src/nvme-cli/nvme list-ns /dev/spdk/nvme0n1
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@83 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/spdk/nvme0
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@84 -- # /usr/local/src/nvme-cli/nvme list-ctrl /dev/spdk/nvme0
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@85 -- # '[' 4 -ne 0 ']'
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@86 -- # /usr/local/src/nvme-cli/nvme fw-log /dev/spdk/nvme0
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@88 -- # /usr/local/src/nvme-cli/nvme smart-log /dev/spdk/nvme0
00:13:44.346  Smart Log for NVME device:nvme0 namespace-id:ffffffff
00:13:44.346  critical_warning			: 0
00:13:44.346  temperature				: 37 °C (310 K)
00:13:44.346  available_spare				: 99%
00:13:44.346  available_spare_threshold		: 10%
00:13:44.346  percentage_used				: 32%
00:13:44.346  endurance group critical warning summary: 0
00:13:44.346  Data Units Read				: 628,379,970 (321.73 TB)
00:13:44.346  Data Units Written			: 790,799,418 (404.89 TB)
00:13:44.346  host_read_commands			: 36,986,167,503
00:13:44.346  host_write_commands			: 42,949,937,724
00:13:44.346  controller_busy_time			: 3,917
00:13:44.346  power_cycles				: 31
00:13:44.346  power_on_hours				: 20,842
00:13:44.346  unsafe_shutdowns			: 46
00:13:44.346  media_errors				: 0
00:13:44.346  num_err_log_entries			: 38,669
00:13:44.346  Warning Temperature Time		: 2198
00:13:44.346  Critical Composite Temperature Time	: 0
00:13:44.346  Thermal Management T1 Trans Count	: 0
00:13:44.346  Thermal Management T2 Trans Count	: 0
00:13:44.346  Thermal Management T1 Total Time	: 0
00:13:44.346  Thermal Management T2 Total Time	: 0
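(Annotation: the smart-log above is served through the CUSE layer from the same SMART/Health log page (LID 02h) the kernel driver would return; values such as percentage_used 32% and 38,669 error-log entries describe this well-worn test drive, not SPDK. For scripting, nvme-cli can emit the same page as JSON:

    nvme smart-log /dev/spdk/nvme0 -o json
)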
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@89 -- # /usr/local/src/nvme-cli/nvme error-log /dev/spdk/nvme0
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@90 -- # /usr/local/src/nvme-cli/nvme get-feature /dev/spdk/nvme0 -f 1 -l 100
00:13:44.346  [2024-12-15 10:51:33.316280] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:13:44.346   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@91 -- # /usr/local/src/nvme-cli/nvme get-log /dev/spdk/nvme0 -i 1 -l 100
00:13:44.347   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@92 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0
00:13:44.347  [2024-12-15 10:51:33.359397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:13:44.347   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@93 -- # /usr/local/src/nvme-cli/nvme set-feature /dev/spdk/nvme0 -n 1 -f 2 -v 0
00:13:44.607  [2024-12-15 10:51:33.379452] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES POWER MANAGEMENT cid:186 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:13:44.607  [2024-12-15 10:51:33.379481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: FEATURE NOT NAMESPACE SPECIFIC (01/0f) qid:0 cid:186 cdw0:0 sqhd:000d p:1 m:0 dnr:1
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@93 -- # true
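(Annotation: the set-feature failure above is expected. Power Management (feature 02h) is controller-scoped, and the script passed -n 1, so the drive answers with command-specific status 0Fh, "Feature Not Namespace Specific" (the 01/0f in the completion print). The test only cares that the ioctl round-trips, so it swallows the error with || true, which is the bare `true` trace above. The conventional form simply omits the namespace:

    nvme set-feature /dev/spdk/nvme0 -f 2 -v 0
)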
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.1
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.2 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.2
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.3 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.3
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.4 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.4
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.5 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.5
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.6 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.6
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.7 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.7
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.8 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.8
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.9 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.9
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.10 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.10
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@95 -- # for i in {1..11}
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@96 -- # '[' -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11 ']'
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@97 -- # sed -i s/nvme0/nvme0/g /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@98 -- # diff --suppress-common-lines /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/kernel.out.11 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files/cuse.out.11
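(Annotation: the eleven-iteration block above is the heart of the test. For each captured command output it normalizes the controller name in the kernel capture (the @97 sed is a no-op here because the kernel device also enumerated as nvme0) and then diffs kernel output against CUSE output at @98; every diff printing nothing means the CUSE devices answered line-for-line like the kernel devices. Condensed, with the match_files paths shortened:

    for i in {1..11}; do
        sed -i "s/nvme0/nvme0/g" kernel.out.$i          # normalize device name (no-op on this host)
        diff --suppress-common-lines kernel.out.$i cuse.out.$i
    done
)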
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@102 -- # rm -Rf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/match_files
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@105 -- # head -c512 /dev/urandom
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@106 -- # /usr/local/src/nvme-cli/nvme write /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file
00:13:44.607  write: Success
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme read /dev/spdk/nvme0n1 --data-size=512 --data=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file
00:13:44.607  read: Success
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@108 -- # cmp /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@109 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/write_file /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/read_file
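(Annotation: lines @105 through @109 are a data-path round-trip: 512 random bytes go out through /dev/spdk/nvme0n1 with nvme write, come back with nvme read, and cmp exits non-zero on any mismatch, so reaching the rm line proves the CUSE block path returned exactly what was written. The same check in isolation, with the flags from the log (this targets LBA 0 by default and is destructive on a drive holding data):

    head -c512 /dev/urandom > write_file
    nvme write /dev/spdk/nvme0n1 --data-size=512 --data=write_file
    nvme read  /dev/spdk/nvme0n1 --data-size=512 --data=read_file
    cmp write_file read_file && echo round-trip OK
)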
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@113 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 5 --cdw10=0x3ff0003 --cdw11=0x1 -r
00:13:44.607  Admin Command Create I/O Completion Queue is Success and result: 0x00000000
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@114 -- # /usr/local/src/nvme-cli/nvme admin-passthru /dev/spdk/nvme0 -o 4 --cdw10=0x3
00:13:44.607  Admin Command Delete I/O Completion Queue is Success and result: 0x00000000
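(Annotation: the two passthrough commands decode as raw NVMe admin-command-set opcodes, and the success prints confirm they traverse the CUSE ioctl path end to end:

    # opcode 0x05 = Create I/O Completion Queue:
    #   CDW10 = QSIZE<<16 | QID  ->  0x3ff0003 = queue 3, 0x3ff + 1 = 1024 entries (zero-based)
    #   CDW11 bit 0 = PC (physically contiguous); -r marks the (empty) data direction as read
    nvme admin-passthru /dev/spdk/nvme0 -o 5 --cdw10=0x3ff0003 --cdw11=0x1 -r
    # opcode 0x04 = Delete I/O Completion Queue: CDW10 = QID
    nvme admin-passthru /dev/spdk/nvme0 -o 4 --cdw10=0x3
)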
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@116 -- # [[ -c /dev/spdk/nvme0 ]]
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@117 -- # [[ -c /dev/spdk/nvme0n1 ]]
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@119 -- # trap - SIGINT SIGTERM EXIT
00:13:44.607   10:51:33	-- cuse/spdk_nvme_cli_cuse.sh@120 -- # killprocess 2148869
00:13:44.607   10:51:33	-- common/autotest_common.sh@936 -- # '[' -z 2148869 ']'
00:13:44.607   10:51:33	-- common/autotest_common.sh@940 -- # kill -0 2148869
00:13:44.607    10:51:33	-- common/autotest_common.sh@941 -- # uname
00:13:44.607   10:51:33	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:44.607    10:51:33	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2148869
00:13:44.866   10:51:33	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:13:44.866   10:51:33	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:13:44.866   10:51:33	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2148869'
00:13:44.866  killing process with pid 2148869
00:13:44.866   10:51:33	-- common/autotest_common.sh@955 -- # kill 2148869
00:13:44.867   10:51:33	-- common/autotest_common.sh@960 -- # wait 2148869
00:13:50.142  
00:13:50.142  real	0m21.168s
00:13:50.142  user	0m21.868s
00:13:50.142  sys	0m5.743s
00:13:50.142   10:51:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:50.142   10:51:38	-- common/autotest_common.sh@10 -- # set +x
00:13:50.142  ************************************
00:13:50.142  END TEST nvme_cli_cuse
00:13:50.142  ************************************
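(Annotation: each suite runs under run_test from autotest_common.sh, which prints the START/END banners and the real/user/sys block above in bash `time` format. The next suite is invoked the same way; $rootdir here is shorthand for the absolute workspace path seen in the log:

    run_test nvme_cli_plugin "$rootdir/test/nvme/cuse/spdk_nvme_cli_plugin.sh"
)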
00:13:50.142   10:51:38	-- cuse/nvme_cuse.sh@20 -- # run_test nvme_cli_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh
00:13:50.142   10:51:38	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:13:50.142   10:51:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:50.142   10:51:38	-- common/autotest_common.sh@10 -- # set +x
00:13:50.142  ************************************
00:13:50.142  START TEST nvme_cli_plugin
00:13:50.142  ************************************
00:13:50.142   10:51:38	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_nvme_cli_plugin.sh
00:13:50.142  * Looking for test storage...
00:13:50.142  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:13:50.142     10:51:38	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:13:50.142      10:51:38	-- common/autotest_common.sh@1690 -- # lcov --version
00:13:50.142      10:51:38	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:13:50.142     10:51:38	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:13:50.142     10:51:38	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:13:50.142     10:51:38	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:13:50.142     10:51:38	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:13:50.142     10:51:38	-- scripts/common.sh@335 -- # IFS=.-:
00:13:50.142     10:51:38	-- scripts/common.sh@335 -- # read -ra ver1
00:13:50.142     10:51:38	-- scripts/common.sh@336 -- # IFS=.-:
00:13:50.142     10:51:38	-- scripts/common.sh@336 -- # read -ra ver2
00:13:50.142     10:51:38	-- scripts/common.sh@337 -- # local 'op=<'
00:13:50.142     10:51:38	-- scripts/common.sh@339 -- # ver1_l=2
00:13:50.142     10:51:38	-- scripts/common.sh@340 -- # ver2_l=1
00:13:50.142     10:51:38	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:13:50.142     10:51:38	-- scripts/common.sh@343 -- # case "$op" in
00:13:50.142     10:51:38	-- scripts/common.sh@344 -- # : 1
00:13:50.142     10:51:38	-- scripts/common.sh@363 -- # (( v = 0 ))
00:13:50.142     10:51:38	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:50.142      10:51:38	-- scripts/common.sh@364 -- # decimal 1
00:13:50.142      10:51:38	-- scripts/common.sh@352 -- # local d=1
00:13:50.142      10:51:38	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:50.142      10:51:38	-- scripts/common.sh@354 -- # echo 1
00:13:50.142     10:51:38	-- scripts/common.sh@364 -- # ver1[v]=1
00:13:50.142      10:51:38	-- scripts/common.sh@365 -- # decimal 2
00:13:50.142      10:51:38	-- scripts/common.sh@352 -- # local d=2
00:13:50.142      10:51:38	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:50.142      10:51:38	-- scripts/common.sh@354 -- # echo 2
00:13:50.142     10:51:38	-- scripts/common.sh@365 -- # ver2[v]=2
00:13:50.142     10:51:38	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:13:50.142     10:51:38	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:13:50.142     10:51:38	-- scripts/common.sh@367 -- # return 0
00:13:50.142     10:51:38	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:50.142     10:51:38	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:13:50.142  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:50.142  		--rc genhtml_branch_coverage=1
00:13:50.142  		--rc genhtml_function_coverage=1
00:13:50.142  		--rc genhtml_legend=1
00:13:50.142  		--rc geninfo_all_blocks=1
00:13:50.142  		--rc geninfo_unexecuted_blocks=1
00:13:50.142  		
00:13:50.142  		'
00:13:50.142     10:51:38	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:13:50.142  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:50.142  		--rc genhtml_branch_coverage=1
00:13:50.142  		--rc genhtml_function_coverage=1
00:13:50.142  		--rc genhtml_legend=1
00:13:50.142  		--rc geninfo_all_blocks=1
00:13:50.142  		--rc geninfo_unexecuted_blocks=1
00:13:50.142  		
00:13:50.142  		'
00:13:50.142     10:51:38	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:13:50.142  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:50.142  		--rc genhtml_branch_coverage=1
00:13:50.142  		--rc genhtml_function_coverage=1
00:13:50.142  		--rc genhtml_legend=1
00:13:50.142  		--rc geninfo_all_blocks=1
00:13:50.142  		--rc geninfo_unexecuted_blocks=1
00:13:50.142  		
00:13:50.142  		'
00:13:50.142     10:51:38	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:13:50.142  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:50.142  		--rc genhtml_branch_coverage=1
00:13:50.142  		--rc genhtml_function_coverage=1
00:13:50.142  		--rc genhtml_legend=1
00:13:50.142  		--rc geninfo_all_blocks=1
00:13:50.142  		--rc geninfo_unexecuted_blocks=1
00:13:50.142  		
00:13:50.142  		'
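(Annotation: the preamble above is autotest_common.sh probing the installed lcov. cmp_versions splits version strings on ".-:" and compares fields numerically; the installed lcov reports 1.15, which is older than 2, so the legacy --rc lcov_branch_coverage / lcov_function_coverage spellings are exported in LCOV_OPTS and LCOV. Each export appears twice in the trace because bash xtraces `export VAR=value` as both the export and the assignment. The version helper reduces to:

    lt() { cmp_versions "$1" "<" "$2"; }    # from scripts/common.sh
)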
00:13:50.142    10:51:38	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:13:50.142       10:51:38	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:13:50.142      10:51:38	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:13:50.142     10:51:38	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:13:50.142     10:51:38	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:13:50.142      10:51:38	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:50.142      10:51:38	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:50.142      10:51:38	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:50.142       10:51:38	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:50.142       10:51:38	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:50.142       10:51:38	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:50.142       10:51:38	-- paths/export.sh@5 -- # export PATH
00:13:50.142       10:51:38	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:50.142     10:51:38	-- nvme/functions.sh@10 -- # ctrls=()
00:13:50.142     10:51:38	-- nvme/functions.sh@10 -- # declare -A ctrls
00:13:50.142     10:51:38	-- nvme/functions.sh@11 -- # nvmes=()
00:13:50.142     10:51:38	-- nvme/functions.sh@11 -- # declare -A nvmes
00:13:50.142     10:51:38	-- nvme/functions.sh@12 -- # bdfs=()
00:13:50.142     10:51:38	-- nvme/functions.sh@12 -- # declare -A bdfs
00:13:50.142     10:51:38	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:13:50.142     10:51:38	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:13:50.142     10:51:38	-- nvme/functions.sh@14 -- # nvme_name=
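(Annotation: sourcing functions.sh declares the associative arrays ctrls, nvmes, and bdfs plus ordered_ctrls; scan_nvme_ctrls, run a little further below, walks /sys/class/nvme and fills one global associative array per device, after which identify fields are addressable by register name, e.g.:

    echo "${nvme0[mn]}"      # model string parsed from id-ctrl
    echo "${nvme0[oacs]}"    # Optional Admin Command Support bitmask
)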
00:13:50.142    10:51:38	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:13:50.142   10:51:38	-- cuse/spdk_nvme_cli_plugin.sh@11 -- # trap 'killprocess $spdk_tgt_pid; "$rootdir/scripts/setup.sh" reset' EXIT
00:13:50.142   10:51:38	-- cuse/spdk_nvme_cli_plugin.sh@28 -- # kernel_out=()
00:13:50.142   10:51:38	-- cuse/spdk_nvme_cli_plugin.sh@29 -- # cuse_out=()
00:13:50.142   10:51:38	-- cuse/spdk_nvme_cli_plugin.sh@31 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:13:50.142   10:51:38	-- cuse/spdk_nvme_cli_plugin.sh@36 -- # export PCI_BLOCKED=
00:13:50.142   10:51:38	-- cuse/spdk_nvme_cli_plugin.sh@36 -- # PCI_BLOCKED=
00:13:50.142   10:51:38	-- cuse/spdk_nvme_cli_plugin.sh@38 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:13:52.679  Waiting for block devices as requested
00:13:52.679  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:13:52.679  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:13:52.679  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:13:52.679  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:13:52.937  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:13:52.937  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:13:52.937  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:13:53.197  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:13:53.197  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:13:53.197  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:13:53.456  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:13:53.456  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:13:53.456  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:13:53.716  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:13:53.716  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:13:53.716  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:13:53.978  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
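(Annotation: setup.sh reset hands the hardware back to the kernel. The lines above show the NVMe SSD at 0000:5e:00.0 moving from vfio-pci to the nvme driver and the sixteen I/OAT DMA channels (8086 2021) back to ioatdma, with "Waiting for block devices" holding until the kernel block node reappears. A minimal sketch of a single rebind using the common sysfs driver_override idiom, not setup.sh's exact internals:

    echo 0000:5e:00.0 > /sys/bus/pci/devices/0000:5e:00.0/driver/unbind
    echo nvme         > /sys/bus/pci/devices/0000:5e:00.0/driver_override
    echo 0000:5e:00.0 > /sys/bus/pci/drivers_probe
)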
00:13:53.978   10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@39 -- # scan_nvme_ctrls
00:13:53.978   10:51:42	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:13:53.978   10:51:42	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:53.978   10:51:42	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@49 -- # pci=0000:5e:00.0
00:13:53.978   10:51:42	-- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0
00:13:53.978   10:51:42	-- scripts/common.sh@15 -- # local i
00:13:53.978   10:51:42	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:13:53.978   10:51:42	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:13:53.978   10:51:42	-- scripts/common.sh@24 -- # return 0
00:13:53.978   10:51:42	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:13:53.978   10:51:42	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:13:53.978   10:51:42	-- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@18 -- # shift
00:13:53.978   10:51:42	-- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978    10:51:42	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[vid]=0x8086
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  BTLJ83030AK84P0DGN   ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN  "'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN  '
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  INTEL SSDPE2KX040T8                      ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8                     "'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8                     '
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  VDV10184 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[rab]=0
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  5cd2e4 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  5 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[mdts]=5
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x10200 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ver]=0x10200
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x989680 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0xe4e1c0 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x200 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[oaes]=0x200
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.978   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"'
00:13:53.978    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ctratt]=0
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.978   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.978   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[cntrltype]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[mec]=1
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[oacs]=0xe
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[acl]=3
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x18 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[frmw]=0x18
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[lpa]=0xe
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  63 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[elpe]=63
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[npss]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  353 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[cctemp]=353
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[kas]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.979   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:13:53.979    10:51:42	-- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.979   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.979   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[pels]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[nn]=128
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x6 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[oncs]=0x6
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[fna]=0x4
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[vwc]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[awun]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ocfs]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[sgls]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n   ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[subnqn]=
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.980   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.980   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]]
00:13:53.980   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"'
00:13:53.980    10:51:42	-- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0'
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n - ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:13:53.981   10:51:42	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:13:53.981   10:51:42	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:13:53.981   10:51:42	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:13:53.981   10:51:42	-- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@18 -- # shift
00:13:53.981   10:51:42	-- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981    10:51:42	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[flbas]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[mc]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[dpc]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[mcl]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[msrc]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:13:53.981    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.981   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.981   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:13:53.981   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:13:53.982    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.982   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  010000009f6e00000000000000000000 ]]
00:13:53.982   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="010000009f6e00000000000000000000"'
00:13:53.982    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[nguid]=010000009f6e00000000000000000000
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.982   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  0000000000009f6e ]]
00:13:53.982   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000009f6e"'
00:13:53.982    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000009f6e
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.982   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0x2 (in use) ]]
00:13:53.982   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0x2 (in use)"'
00:13:53.982    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0x2 (in use)'
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.982   10:51:42	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:13:53.982   10:51:42	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0   lbads:12 rp:0 "'
00:13:53.982    10:51:42	-- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0   lbads:12 rp:0 '
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # IFS=:
00:13:53.982   10:51:42	-- nvme/functions.sh@21 -- # read -r reg val
00:13:53.982   10:51:42	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:13:53.982   10:51:42	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:13:53.982   10:51:42	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:13:53.982   10:51:42	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0
00:13:53.982   10:51:42	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:13:53.982   10:51:42	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
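
The long run of functions.sh@21-23 traces above is the harness caching every id-ctrl/id-ns field: each "field : value" line emitted by nvme-cli is split on ':' with IFS=: read -r reg val and eval'ed into a global associative array (nvme0 for the controller, nvme0n1 for the namespace). A minimal standalone sketch of the same pattern, assuming nvme-cli is installed and using a hypothetical array name ctrl_info rather than SPDK's eval-based nvme_get:

  declare -A ctrl_info
  while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                # field name, e.g. "sgls" or "ps0"
    val="${val#"${val%%[![:space:]]*}"}"    # trim leading blanks from the value
    [[ -n $reg && -n $val ]] && ctrl_info[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  printf 'mn=%s sn=%s\n' "${ctrl_info[mn]}" "${ctrl_info[sn]}"

Values that themselves contain colons (the ps0 power-state line, for example) stay intact because read -r assigns everything after the first ':' to val.
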
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@41 -- # nvme list
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:13:53.982   10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@41 -- # kernel_out[0]='Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:13:53.982  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:13:53.982  nvme0n1          nvme0n1            BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184'
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@42 -- # nvme list -v
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:13:53.982   10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@42 -- # kernel_out[1]='Subsystem        Subsystem-NQN                                                                                    Controllers
00:13:53.982  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:13:53.982  nvme0     nvme0
00:13:53.982  
00:13:53.982  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:13:53.982  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:13:53.982  nvme0    BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0 nvme0n1
00:13:53.982  
00:13:53.982  Device       Generic      NSID       Usage                      Format           Controllers     
00:13:53.982  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:13:53.982  nvme0n1 nvme0n1   0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0'
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@43 -- # nvme list -v -o json
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list -v -o json
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:13:53.982   10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@43 -- # kernel_out[2]='{
00:13:53.982    "Devices":[
00:13:53.982      {
00:13:53.982        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:13:53.982        "Subsystems":[
00:13:53.982          {
00:13:53.982            "Subsystem":"nvme0",
00:13:53.982            
00:13:53.982            "Controllers":[
00:13:53.982              {
00:13:53.982                "Controller":"nvme0",
00:13:53.982                "SerialNumber":"BTLJ83030AK84P0DGN",
00:13:53.982                "ModelNumber":"INTEL SSDPE2KX040T8",
00:13:53.982                "Firmware":"VDV10184",
00:13:53.982                "Transport":"pcie",
00:13:53.982                "Address":"0000:5e:00.0",
00:13:53.982                "Namespaces":[
00:13:53.982                  {
00:13:53.982                    "NameSpace":"nvme0n1",
00:13:53.982                    "Generic":"nvme0n1",
00:13:53.982                    "NSID":1,
00:13:53.982                    "UsedBytes":4000787030016,
00:13:53.982                    "MaximumLBA":7814037168,
00:13:53.982                    "PhysicalSize":4000787030016,
00:13:53.982                    "SectorSize":512
00:13:53.982                  }
00:13:53.982                ],
00:13:53.982                "Paths":[
00:13:53.982                ]
00:13:53.982              }
00:13:53.982            ],
00:13:53.982            "Namespaces":[
00:13:53.982            ]
00:13:53.982          }
00:13:53.982        ]
00:13:53.982      }
00:13:53.982    ]
00:13:53.982  }'
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@44 -- # nvme list-subsys
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme list-subsys
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:13:53.982    10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:13:53.982   10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@44 -- # kernel_out[3]='nvme0 - 
00:13:53.982  \
00:13:53.982   +- nvme0 pcie 0000:5e:00.0 live'
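
All four list variants above (list, list -v, list -v -o json, list-subsys) pass through the same sed chain at plugin line @17 before capture: NQNs are blanked, /dev/ and /dev/spdk/ prefixes are dropped, generic ng* names are rewritten to nvme*, and PCIE/(null) are normalized to pcie/live. That is what lets the kernel-driver output captured here be diffed byte-for-byte against the CUSE output later. The filter as a standalone function:

  normalize_nvme_list() {
    sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' \
        -e 's#/dev\(/spdk\)\?/##g' -e 's#ng#nvme#g' -e 's#-subsys##g' \
        -e 's#PCIE#pcie#g' -e 's#(null)#live#g'
  }
  nvme list -v | normalize_nvme_list
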
00:13:53.982   10:51:42	-- cuse/spdk_nvme_cli_plugin.sh@46 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:13:57.274  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:13:57.274  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:14:00.566  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
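
setup.sh has just detached every ioatdma channel and the NVMe controller from their kernel drivers and handed them to vfio-pci so spdk_tgt can drive them from userspace. The sysfs mechanics behind each "driver -> vfio-pci" line, sketched for one BDF (the real script also handles hugepages and device permissions; run as root):

  bdf=0000:5e:00.0
  modprobe vfio-pci
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"       # detach nvme/ioatdma
  echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the new driver
  echo "$bdf" > /sys/bus/pci/drivers_probe                      # rebind
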
00:14:00.566   10:51:49	-- cuse/spdk_nvme_cli_plugin.sh@49 -- # spdk_tgt_pid=2153059
00:14:00.566   10:51:49	-- cuse/spdk_nvme_cli_plugin.sh@48 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:14:00.566   10:51:49	-- cuse/spdk_nvme_cli_plugin.sh@51 -- # waitforlisten 2153059
00:14:00.566   10:51:49	-- common/autotest_common.sh@829 -- # '[' -z 2153059 ']'
00:14:00.566   10:51:49	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:00.566   10:51:49	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:00.566   10:51:49	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:00.566  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:00.566   10:51:49	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:00.566   10:51:49	-- common/autotest_common.sh@10 -- # set +x
00:14:00.566  [2024-12-15 10:51:49.211872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:00.566  [2024-12-15 10:51:49.211941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153059 ]
00:14:00.566  EAL: No free 2048 kB hugepages reported on node 1
00:14:00.566  [2024-12-15 10:51:49.320848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:00.566  [2024-12-15 10:51:49.425730] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:00.566  [2024-12-15 10:51:49.425885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:00.825  [2024-12-15 10:51:49.629609] 'OCF_Core' volume operations registered
00:14:00.826  [2024-12-15 10:51:49.633307] 'OCF_Cache' volume operations registered
00:14:00.826  [2024-12-15 10:51:49.637235] 'OCF Composite' volume operations registered
00:14:00.826  [2024-12-15 10:51:49.640674] 'SPDK_block_device' volume operations registered
00:14:01.393   10:51:50	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:01.393   10:51:50	-- common/autotest_common.sh@862 -- # return 0
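
waitforlisten (autotest_common.sh@829-862 in the trace above) blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock, polling up to max_retries=100 times. A minimal sketch of the pattern, assuming spdk_tgt and rpc.py are on PATH and using the real rpc_get_methods RPC as the liveness probe:

  spdk_tgt & pid=$!
  for ((i = 0; i < 100; i++)); do
    rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$pid" || { echo "spdk_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done
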
00:14:01.393   10:51:50	-- cuse/spdk_nvme_cli_plugin.sh@54 -- # for ctrl in "${ordered_ctrls[@]}"
00:14:01.393   10:51:50	-- cuse/spdk_nvme_cli_plugin.sh@55 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:5e:00.0
00:14:04.684  nvme0n1
00:14:04.684   10:51:53	-- cuse/spdk_nvme_cli_plugin.sh@56 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n nvme0
00:14:04.944  [2024-12-15 10:51:53.772826] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:14:04.944  [2024-12-15 10:51:53.772989] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:14:04.944  [2024-12-15 10:51:53.773098] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:14:04.944   10:51:53	-- cuse/spdk_nvme_cli_plugin.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:14:05.203  [
00:14:05.203    {
00:14:05.203      "name": "nvme0n1",
00:14:05.203      "aliases": [
00:14:05.203        "9b8c1ea2-4896-4018-9411-a0a02c7991a9"
00:14:05.203      ],
00:14:05.203      "product_name": "NVMe disk",
00:14:05.203      "block_size": 512,
00:14:05.203      "num_blocks": 7814037168,
00:14:05.203      "uuid": "9b8c1ea2-4896-4018-9411-a0a02c7991a9",
00:14:05.203      "assigned_rate_limits": {
00:14:05.203        "rw_ios_per_sec": 0,
00:14:05.203        "rw_mbytes_per_sec": 0,
00:14:05.203        "r_mbytes_per_sec": 0,
00:14:05.203        "w_mbytes_per_sec": 0
00:14:05.203      },
00:14:05.203      "claimed": false,
00:14:05.203      "zoned": false,
00:14:05.203      "supported_io_types": {
00:14:05.203        "read": true,
00:14:05.203        "write": true,
00:14:05.203        "unmap": true,
00:14:05.203        "write_zeroes": true,
00:14:05.203        "flush": true,
00:14:05.203        "reset": true,
00:14:05.203        "compare": false,
00:14:05.203        "compare_and_write": false,
00:14:05.203        "abort": true,
00:14:05.203        "nvme_admin": true,
00:14:05.203        "nvme_io": true
00:14:05.203      },
00:14:05.203      "driver_specific": {
00:14:05.203        "nvme": [
00:14:05.203          {
00:14:05.203            "pci_address": "0000:5e:00.0",
00:14:05.203            "trid": {
00:14:05.203              "trtype": "PCIe",
00:14:05.203              "traddr": "0000:5e:00.0"
00:14:05.203            },
00:14:05.203            "cuse_device": "spdk/nvme0n1",
00:14:05.203            "ctrlr_data": {
00:14:05.203              "cntlid": 0,
00:14:05.203              "vendor_id": "0x8086",
00:14:05.203              "model_number": "INTEL SSDPE2KX040T8",
00:14:05.203              "serial_number": "BTLJ83030AK84P0DGN",
00:14:05.203              "firmware_revision": "VDV10184",
00:14:05.203              "oacs": {
00:14:05.203                "security": 0,
00:14:05.203                "format": 1,
00:14:05.203                "firmware": 1,
00:14:05.203                "ns_manage": 1
00:14:05.203              },
00:14:05.203              "multi_ctrlr": false,
00:14:05.203              "ana_reporting": false
00:14:05.203            },
00:14:05.203            "vs": {
00:14:05.203              "nvme_version": "1.2"
00:14:05.203            },
00:14:05.203            "ns_data": {
00:14:05.203              "id": 1,
00:14:05.203              "can_share": false
00:14:05.203            }
00:14:05.203          }
00:14:05.203        ],
00:14:05.203        "mp_policy": "active_passive"
00:14:05.204      }
00:14:05.204    }
00:14:05.204  ]
00:14:05.204   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@61 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_get_controllers
00:14:05.463  [
00:14:05.463    {
00:14:05.463      "name": "nvme0",
00:14:05.463      "ctrlrs": [
00:14:05.463        {
00:14:05.463          "state": "enabled",
00:14:05.463          "cuse_device": "spdk/nvme0",
00:14:05.463          "trid": {
00:14:05.463            "trtype": "PCIe",
00:14:05.463            "traddr": "0000:5e:00.0"
00:14:05.463          },
00:14:05.463          "cntlid": 0,
00:14:05.463          "host": {
00:14:05.463            "nqn": "nqn.2014-08.org.nvmexpress:uuid:c29a9f61-82aa-42f0-aee5-8f803281016c",
00:14:05.463            "addr": "",
00:14:05.463            "svcid": ""
00:14:05.463          }
00:14:05.463        }
00:14:05.463      ]
00:14:05.463    }
00:14:05.463  ]
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@63 -- # nvme spdk list
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:05.463   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@63 -- # cuse_out[0]='Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:14:05.463  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:14:05.463  nvme0n1     nvme0n1     BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184'
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@64 -- # nvme spdk list -v
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:05.463   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@64 -- # cuse_out[1]='Subsystem        Subsystem-NQN                                                                                    Controllers
00:14:05.463  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:14:05.463  nvme0                                                                                                             nvme0
00:14:05.463  
00:14:05.463  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:14:05.463  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:14:05.463  nvme0 BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0        nvme0n1
00:14:05.463  
00:14:05.463  Device       Generic      NSID       Usage                      Format           Controllers     
00:14:05.463  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:14:05.463  nvme0n1 nvme0n1 0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0'
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@65 -- # nvme spdk list -v -o json
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list -v -o json
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:05.463   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@65 -- # cuse_out[2]='{
00:14:05.463    "Devices":[
00:14:05.463      {
00:14:05.463        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:14:05.463        "Subsystems":[
00:14:05.463          {
00:14:05.463            "Subsystem":"nvme0",
00:14:05.463            
00:14:05.463            "Controllers":[
00:14:05.463              {
00:14:05.463                "Controller":"nvme0",
00:14:05.463                "SerialNumber":"BTLJ83030AK84P0DGN",
00:14:05.463                "ModelNumber":"INTEL SSDPE2KX040T8",
00:14:05.463                "Firmware":"VDV10184",
00:14:05.463                "Transport":"pcie",
00:14:05.463                "Address":"0000:5e:00.0",
00:14:05.463                "Namespaces":[
00:14:05.463                  {
00:14:05.463                    "NameSpace":"nvme0n1",
00:14:05.463                    "Generic":"nvme0n1",
00:14:05.463                    "NSID":1,
00:14:05.463                    "UsedBytes":4000787030016,
00:14:05.463                    "MaximumLBA":7814037168,
00:14:05.463                    "PhysicalSize":4000787030016,
00:14:05.463                    "SectorSize":512
00:14:05.463                  }
00:14:05.463                ],
00:14:05.463                "Paths":[
00:14:05.463                ]
00:14:05.463              }
00:14:05.463            ],
00:14:05.463            "Namespaces":[
00:14:05.463            ]
00:14:05.463          }
00:14:05.463        ]
00:14:05.463      }
00:14:05.463    ]
00:14:05.463  }'
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@66 -- # nvme spdk list-subsys
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys
00:14:05.463    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:05.722    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:05.722   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@66 -- # cuse_out[3]='nvme0 - 
00:14:05.722  \
00:14:05.722   +- nvme0 pcie 0000:5e:00.0 live'
00:14:05.722    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@69 -- # nvme spdk list-subsys -v -o json
00:14:05.722    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@17 -- # sed -e 's#nqn.\+ ##g' -e 's#"SubsystemNQN.*##g' -e 's#NQN=.*##g' -e 's#/dev\(/spdk\)\?/##g' -e s#ng#nvme#g -e s#-subsys##g -e s#PCIE#pcie#g -e 's#(null)#live#g'
00:14:05.722    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@15 -- # /usr/local/src/nvme-cli-plugin/nvme spdk list-subsys -v -o json
00:14:05.722    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # (( PIPESTATUS[0] == 0 ))
00:14:05.722     10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # trap - ERR
00:14:05.722     10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@25 -- # print_backtrace
00:14:05.722     10:51:54	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:14:05.722     10:51:54	-- common/autotest_common.sh@1142 -- # return 0
00:14:05.722   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@69 -- # [[ Json output format is not supported. == \J\s\o\n\ \o\u\t\p\u\t\ \f\o\r\m\a\t\ \i\s\ \n\o\t\ \s\u\p\p\o\r\t\e\d\. ]]
00:14:05.722   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@71 -- # diff -ub /dev/fd/62 /dev/fd/61
00:14:05.722    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:14:05.722  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:14:05.722  nvme0n1          nvme0n1            BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184' 'Subsystem        Subsystem-NQN                                                                                    Controllers
00:14:05.722  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:14:05.722  nvme0     nvme0
00:14:05.722  
00:14:05.722  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:14:05.722  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:14:05.722  nvme0    BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0 nvme0n1
00:14:05.722  
00:14:05.722  Device       Generic      NSID       Usage                      Format           Controllers     
00:14:05.722  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:14:05.723  nvme0n1 nvme0n1   0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0' '{
00:14:05.723    "Devices":[
00:14:05.723      {
00:14:05.723        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:14:05.723        "Subsystems":[
00:14:05.723          {
00:14:05.723            "Subsystem":"nvme0",
00:14:05.723            
00:14:05.723            "Controllers":[
00:14:05.723              {
00:14:05.723                "Controller":"nvme0",
00:14:05.723                "SerialNumber":"BTLJ83030AK84P0DGN",
00:14:05.723                "ModelNumber":"INTEL SSDPE2KX040T8",
00:14:05.723                "Firmware":"VDV10184",
00:14:05.723                "Transport":"pcie",
00:14:05.723                "Address":"0000:5e:00.0",
00:14:05.723                "Namespaces":[
00:14:05.723                  {
00:14:05.723                    "NameSpace":"nvme0n1",
00:14:05.723                    "Generic":"nvme0n1",
00:14:05.723                    "NSID":1,
00:14:05.723                    "UsedBytes":4000787030016,
00:14:05.723                    "MaximumLBA":7814037168,
00:14:05.723                    "PhysicalSize":4000787030016,
00:14:05.723                    "SectorSize":512
00:14:05.723                  }
00:14:05.723                ],
00:14:05.723                "Paths":[
00:14:05.723                ]
00:14:05.723              }
00:14:05.723            ],
00:14:05.723            "Namespaces":[
00:14:05.723            ]
00:14:05.723          }
00:14:05.723        ]
00:14:05.723      }
00:14:05.723    ]
00:14:05.723  }' 'nvme0 - 
00:14:05.723  \
00:14:05.723   +- nvme0 pcie 0000:5e:00.0 live'
00:14:05.723    10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@71 -- # printf '%s\n' 'Node                  Generic               SN                   Model                                    Namespace  Usage                      Format           FW Rev  
00:14:05.723  --------------------- --------------------- -------------------- ---------------------------------------- ---------- -------------------------- ---------------- --------
00:14:05.723  nvme0n1     nvme0n1     BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      0x1          4.00  TB /   4.00  TB    512   B +  0 B   VDV10184' 'Subsystem        Subsystem-NQN                                                                                    Controllers
00:14:05.723  ---------------- ------------------------------------------------------------------------------------------------ ----------------
00:14:05.723  nvme0                                                                                                             nvme0
00:14:05.723  
00:14:05.723  Device   SN                   MN                                       FR       TxPort Address        Subsystem    Namespaces      
00:14:05.723  -------- -------------------- ---------------------------------------- -------- ------ -------------- ------------ ----------------
00:14:05.723  nvme0 BTLJ83030AK84P0DGN   INTEL SSDPE2KX040T8                      VDV10184 pcie   0000:5e:00.0   nvme0        nvme0n1
00:14:05.723  
00:14:05.723  Device       Generic      NSID       Usage                      Format           Controllers     
00:14:05.723  ------------ ------------ ---------- -------------------------- ---------------- ----------------
00:14:05.723  nvme0n1 nvme0n1 0x1          4.00  TB /   4.00  TB    512   B +  0 B   nvme0' '{
00:14:05.723    "Devices":[
00:14:05.723      {
00:14:05.723        "HostNQN":"nqn.2014-08.org.nvmexpress:uuid:00067ae0-6ec8-e711-906e-00163566263e",
00:14:05.723        "Subsystems":[
00:14:05.723          {
00:14:05.723            "Subsystem":"nvme0",
00:14:05.723            
00:14:05.723            "Controllers":[
00:14:05.723              {
00:14:05.723                "Controller":"nvme0",
00:14:05.723                "SerialNumber":"BTLJ83030AK84P0DGN",
00:14:05.723                "ModelNumber":"INTEL SSDPE2KX040T8",
00:14:05.723                "Firmware":"VDV10184",
00:14:05.723                "Transport":"pcie",
00:14:05.723                "Address":"0000:5e:00.0",
00:14:05.723                "Namespaces":[
00:14:05.723                  {
00:14:05.723                    "NameSpace":"nvme0n1",
00:14:05.723                    "Generic":"nvme0n1",
00:14:05.723                    "NSID":1,
00:14:05.723                    "UsedBytes":4000787030016,
00:14:05.723                    "MaximumLBA":7814037168,
00:14:05.723                    "PhysicalSize":4000787030016,
00:14:05.723                    "SectorSize":512
00:14:05.723                  }
00:14:05.723                ],
00:14:05.723                "Paths":[
00:14:05.723                ]
00:14:05.723              }
00:14:05.723            ],
00:14:05.723            "Namespaces":[
00:14:05.723            ]
00:14:05.723          }
00:14:05.723        ]
00:14:05.723      }
00:14:05.723    ]
00:14:05.723  }' 'nvme0 - 
00:14:05.723  \
00:14:05.723   +- nvme0 pcie 0000:5e:00.0 live'
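
The @71 comparison above is diff over two process substitutions: the four kernel_out captures are printed into one file descriptor and the four cuse_out captures into the other, so any formatting drift between the kernel-driver path and the CUSE path fails the test (bash renders the <(...) substitutions as /dev/fd/62 and /dev/fd/61 in the trace):

  diff -ub <(printf '%s\n' "${kernel_out[@]}") <(printf '%s\n' "${cuse_out[@]}")

An empty diff, as here, is the pass condition.
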
00:14:05.723   10:51:54	-- cuse/spdk_nvme_cli_plugin.sh@1 -- # killprocess 2153059
00:14:05.723   10:51:54	-- common/autotest_common.sh@936 -- # '[' -z 2153059 ']'
00:14:05.723   10:51:54	-- common/autotest_common.sh@940 -- # kill -0 2153059
00:14:05.723    10:51:54	-- common/autotest_common.sh@941 -- # uname
00:14:05.723   10:51:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:05.723    10:51:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2153059
00:14:05.723   10:51:54	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:05.723   10:51:54	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:05.723   10:51:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2153059'
00:14:05.723  killing process with pid 2153059
00:14:05.723   10:51:54	-- common/autotest_common.sh@955 -- # kill 2153059
00:14:05.723   10:51:54	-- common/autotest_common.sh@960 -- # wait 2153059
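
killprocess (@936-960) is the standard teardown: confirm the pid is still alive with kill -0, refuse to signal anything whose comm is sudo, send SIGTERM, then wait so the child's exit status is reaped. A condensed equivalent, assuming the target is a child of the calling shell as it is in this harness:

  killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0            # already gone
    [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                # reap; kill status expected
  }
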
00:14:10.995   10:51:59	-- cuse/spdk_nvme_cli_plugin.sh@1 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:14:13.532  Waiting for block devices as requested
00:14:13.791  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:14:13.791  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:14.050  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:14.050  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:14.050  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:14.309  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:14.309  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:14.309  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:14.568  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:14:14.568  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:14.568  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:14.828  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:14.828  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:14.828  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:15.088  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:15.088  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:15.088  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:14:15.088  
00:14:15.088  real	0m25.883s
00:14:15.088  user	0m13.580s
00:14:15.088  sys	0m8.027s
00:14:15.088   10:52:04	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:15.088   10:52:04	-- common/autotest_common.sh@10 -- # set +x
00:14:15.088  ************************************
00:14:15.088  END TEST nvme_cli_plugin
00:14:15.088  ************************************
00:14:15.348   10:52:04	-- cuse/nvme_cuse.sh@21 -- # run_test nvme_smartctl_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh
00:14:15.348   10:52:04	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:15.348   10:52:04	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:15.348   10:52:04	-- common/autotest_common.sh@10 -- # set +x
00:14:15.348  ************************************
00:14:15.348  START TEST nvme_smartctl_cuse
00:14:15.348  ************************************
00:14:15.348   10:52:04	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/spdk_smartctl_cuse.sh
00:14:15.348  * Looking for test storage...
00:14:15.348  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:14:15.348    10:52:04	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:15.348     10:52:04	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:15.348     10:52:04	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:15.348    10:52:04	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:15.348    10:52:04	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:15.348    10:52:04	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:15.348    10:52:04	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:15.348    10:52:04	-- scripts/common.sh@335 -- # IFS=.-:
00:14:15.348    10:52:04	-- scripts/common.sh@335 -- # read -ra ver1
00:14:15.348    10:52:04	-- scripts/common.sh@336 -- # IFS=.-:
00:14:15.348    10:52:04	-- scripts/common.sh@336 -- # read -ra ver2
00:14:15.348    10:52:04	-- scripts/common.sh@337 -- # local 'op=<'
00:14:15.348    10:52:04	-- scripts/common.sh@339 -- # ver1_l=2
00:14:15.348    10:52:04	-- scripts/common.sh@340 -- # ver2_l=1
00:14:15.348    10:52:04	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:15.348    10:52:04	-- scripts/common.sh@343 -- # case "$op" in
00:14:15.348    10:52:04	-- scripts/common.sh@344 -- # : 1
00:14:15.348    10:52:04	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:15.348    10:52:04	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:15.348     10:52:04	-- scripts/common.sh@364 -- # decimal 1
00:14:15.348     10:52:04	-- scripts/common.sh@352 -- # local d=1
00:14:15.348     10:52:04	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:15.348     10:52:04	-- scripts/common.sh@354 -- # echo 1
00:14:15.348    10:52:04	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:15.348     10:52:04	-- scripts/common.sh@365 -- # decimal 2
00:14:15.348     10:52:04	-- scripts/common.sh@352 -- # local d=2
00:14:15.348     10:52:04	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:15.348     10:52:04	-- scripts/common.sh@354 -- # echo 2
00:14:15.348    10:52:04	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:15.348    10:52:04	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:15.348    10:52:04	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:15.348    10:52:04	-- scripts/common.sh@367 -- # return 0
00:14:15.348    10:52:04	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:15.348    10:52:04	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:15.348  		--rc genhtml_branch_coverage=1
00:14:15.348  		--rc genhtml_function_coverage=1
00:14:15.348  		--rc genhtml_legend=1
00:14:15.348  		--rc geninfo_all_blocks=1
00:14:15.348  		--rc geninfo_unexecuted_blocks=1
00:14:15.348  		
00:14:15.348  		'
00:14:15.348    10:52:04	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:15.348  		--rc genhtml_branch_coverage=1
00:14:15.348  		--rc genhtml_function_coverage=1
00:14:15.348  		--rc genhtml_legend=1
00:14:15.348  		--rc geninfo_all_blocks=1
00:14:15.348  		--rc geninfo_unexecuted_blocks=1
00:14:15.348  		
00:14:15.348  		'
00:14:15.348    10:52:04	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:15.348  		--rc genhtml_branch_coverage=1
00:14:15.348  		--rc genhtml_function_coverage=1
00:14:15.348  		--rc genhtml_legend=1
00:14:15.348  		--rc geninfo_all_blocks=1
00:14:15.348  		--rc geninfo_unexecuted_blocks=1
00:14:15.348  		
00:14:15.348  		'
00:14:15.348    10:52:04	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:15.348  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:15.348  		--rc genhtml_branch_coverage=1
00:14:15.348  		--rc genhtml_function_coverage=1
00:14:15.348  		--rc genhtml_legend=1
00:14:15.348  		--rc geninfo_all_blocks=1
00:14:15.348  		--rc geninfo_unexecuted_blocks=1
00:14:15.348  		
00:14:15.348  		'
00:14:15.348   10:52:04	-- cuse/spdk_smartctl_cuse.sh@11 -- # SMARTCTL_CMD='smartctl -d nvme'
00:14:15.348   10:52:04	-- cuse/spdk_smartctl_cuse.sh@12 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:14:15.348   10:52:04	-- cuse/spdk_smartctl_cuse.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:14:18.669  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:14:18.669  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:14:22.064  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:14:22.064    10:52:10	-- cuse/spdk_smartctl_cuse.sh@16 -- # get_first_nvme_bdf
00:14:22.064    10:52:10	-- common/autotest_common.sh@1519 -- # bdfs=()
00:14:22.064    10:52:10	-- common/autotest_common.sh@1519 -- # local bdfs
00:14:22.064    10:52:10	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:14:22.064     10:52:10	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:14:22.064     10:52:10	-- common/autotest_common.sh@1508 -- # bdfs=()
00:14:22.064     10:52:10	-- common/autotest_common.sh@1508 -- # local bdfs
00:14:22.064     10:52:10	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:22.064      10:52:10	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:14:22.064      10:52:10	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:14:22.064     10:52:10	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:14:22.064     10:52:10	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:14:22.064    10:52:10	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:14:22.064   10:52:10	-- cuse/spdk_smartctl_cuse.sh@16 -- # bdf=0000:5e:00.0
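
get_first_nvme_bdf (common.sh@1508-1522 above) enumerates NVMe controllers by generating a throwaway SPDK config with gen_nvme.sh and extracting every PCI address with jq, then takes element zero; on this node that yields the single controller:

  bdfs=($(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh \
          | jq -r '.config[].params.traddr'))
  bdf=${bdfs[0]}    # 0000:5e:00.0 here
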
00:14:22.064   10:52:10	-- cuse/spdk_smartctl_cuse.sh@18 -- # PCI_ALLOWED=0000:5e:00.0
00:14:22.064   10:52:10	-- cuse/spdk_smartctl_cuse.sh@18 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:14:25.357  0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0
00:14:25.358  0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1
00:14:25.358  0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2
00:14:25.358  0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3
00:14:25.358  0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4
00:14:25.358  0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5
00:14:25.358  0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6
00:14:25.358  0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7
00:14:25.358  0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0
00:14:25.358  0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1
00:14:25.358  0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2
00:14:25.358  0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3
00:14:25.358  0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4
00:14:25.358  0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5
00:14:25.358  0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6
00:14:25.358  0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7
00:14:25.358  Waiting for block devices as requested
00:14:25.358  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:14:25.358    10:52:14	-- cuse/spdk_smartctl_cuse.sh@19 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:14:25.358     10:52:14	-- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0
00:14:25.358     10:52:14	-- common/autotest_common.sh@1497 -- # grep 0000:5e:00.0/nvme/nvme
00:14:25.358    10:52:14	-- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:14:25.358    10:52:14	-- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:14:25.358     10:52:14	-- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:14:25.358    10:52:14	-- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0
00:14:25.358   10:52:14	-- cuse/spdk_smartctl_cuse.sh@19 -- # nvme_name=nvme0
00:14:25.358   10:52:14	-- cuse/spdk_smartctl_cuse.sh@20 -- # [[ -z nvme0 ]]
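
get_nvme_ctrlr_from_bdf maps the PCI address back to a kernel name by resolving the /sys/class/nvme symlinks and keeping the one whose real path runs through 0000:5e:00.0/nvme/. A glob-based equivalent of the readlink/grep/basename chain traced above:

  bdf=0000:5e:00.0
  for link in /sys/class/nvme/nvme*; do
    path=$(readlink -f "$link")
    [[ $path == *"$bdf"/nvme/nvme* ]] && { nvme_name=${path##*/}; break; }
  done
  echo "$nvme_name"   # nvme0
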
00:14:25.358    10:52:14	-- cuse/spdk_smartctl_cuse.sh@25 -- # grep -v /dev/nvme0
00:14:25.358    10:52:14	-- cuse/spdk_smartctl_cuse.sh@25 -- # sort
00:14:25.358    10:52:14	-- cuse/spdk_smartctl_cuse.sh@25 -- # smartctl -d nvme --json=g -a /dev/nvme0
00:14:25.358   10:52:14	-- cuse/spdk_smartctl_cuse.sh@25 -- # KERNEL_SMART_JSON='json = {};
00:14:25.358  json.device = {};
00:14:25.358  json.device.protocol = "NVMe";
00:14:25.358  json.device.type = "nvme";
00:14:25.358  json.firmware_version = "VDV10184";
00:14:25.358  json.json_format_version = [];
00:14:25.358  json.json_format_version[0] = 1;
00:14:25.358  json.json_format_version[1] = 0;
00:14:25.358  json.local_time = {};
00:14:25.358  json.local_time.asctime = "Sun Dec 15 10:52:14 2024 CET";
00:14:25.358  json.local_time.time_t = 1734256334;
00:14:25.358  json.model_name = "INTEL SSDPE2KX040T8";
00:14:25.358  json.nvme_controller_id = 0;
00:14:25.358  json.nvme_error_information_log = {};
00:14:25.358  json.nvme_error_information_log.read = 16;
00:14:25.358  json.nvme_error_information_log.size = 64;
00:14:25.358  json.nvme_error_information_log.table = [];
00:14:25.358  json.nvme_error_information_log.table[0] = {};
00:14:25.358  json.nvme_error_information_log.table[0].error_count = 38669;
00:14:25.358  json.nvme_error_information_log.table[0].lba = {};
00:14:25.358  json.nvme_error_information_log.table[0].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[0].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[0].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[1] = {};
00:14:25.358  json.nvme_error_information_log.table[10] = {};
00:14:25.358  json.nvme_error_information_log.table[10].error_count = 38659;
00:14:25.358  json.nvme_error_information_log.table[10].lba = {};
00:14:25.358  json.nvme_error_information_log.table[10].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[10].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[10].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[11] = {};
00:14:25.358  json.nvme_error_information_log.table[11].error_count = 38658;
00:14:25.358  json.nvme_error_information_log.table[11].lba = {};
00:14:25.358  json.nvme_error_information_log.table[11].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[11].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[11].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:14:25.358  json.nvme_error_information_log.table[12] = {};
00:14:25.358  json.nvme_error_information_log.table[12].error_count = 38657;
00:14:25.358  json.nvme_error_information_log.table[12].lba = {};
00:14:25.358  json.nvme_error_information_log.table[12].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[12].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[12].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[13] = {};
00:14:25.358  json.nvme_error_information_log.table[13].error_count = 38656;
00:14:25.358  json.nvme_error_information_log.table[13].lba = {};
00:14:25.358  json.nvme_error_information_log.table[13].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[13].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[13].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[14] = {};
00:14:25.358  json.nvme_error_information_log.table[14].error_count = 38655;
00:14:25.358  json.nvme_error_information_log.table[14].lba = {};
00:14:25.358  json.nvme_error_information_log.table[14].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[14].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[14].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:14:25.358  json.nvme_error_information_log.table[15] = {};
00:14:25.358  json.nvme_error_information_log.table[15].error_count = 38654;
00:14:25.358  json.nvme_error_information_log.table[15].lba = {};
00:14:25.358  json.nvme_error_information_log.table[15].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[15].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[15].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[1].error_count = 38668;
00:14:25.358  json.nvme_error_information_log.table[1].lba = {};
00:14:25.358  json.nvme_error_information_log.table[1].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[1].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[1].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[2] = {};
00:14:25.358  json.nvme_error_information_log.table[2].error_count = 38667;
00:14:25.358  json.nvme_error_information_log.table[2].lba = {};
00:14:25.358  json.nvme_error_information_log.table[2].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[2].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[2].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:14:25.358  json.nvme_error_information_log.table[3] = {};
00:14:25.358  json.nvme_error_information_log.table[3].error_count = 38666;
00:14:25.358  json.nvme_error_information_log.table[3].lba = {};
00:14:25.358  json.nvme_error_information_log.table[3].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[3].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[3].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[4] = {};
00:14:25.358  json.nvme_error_information_log.table[4].error_count = 38665;
00:14:25.358  json.nvme_error_information_log.table[4].lba = {};
00:14:25.358  json.nvme_error_information_log.table[4].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[4].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[4].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[5] = {};
00:14:25.358  json.nvme_error_information_log.table[5].error_count = 38664;
00:14:25.358  json.nvme_error_information_log.table[5].lba = {};
00:14:25.358  json.nvme_error_information_log.table[5].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[5].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[5].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:14:25.358  json.nvme_error_information_log.table[6] = {};
00:14:25.358  json.nvme_error_information_log.table[6].error_count = 38663;
00:14:25.358  json.nvme_error_information_log.table[6].lba = {};
00:14:25.358  json.nvme_error_information_log.table[6].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[6].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[6].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[7] = {};
00:14:25.358  json.nvme_error_information_log.table[7].error_count = 38662;
00:14:25.358  json.nvme_error_information_log.table[7].lba = {};
00:14:25.358  json.nvme_error_information_log.table[7].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[7].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[7].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.table[8] = {};
00:14:25.358  json.nvme_error_information_log.table[8].error_count = 38661;
00:14:25.358  json.nvme_error_information_log.table[8].lba = {};
00:14:25.358  json.nvme_error_information_log.table[8].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[8].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[8].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:14:25.358  json.nvme_error_information_log.table[9] = {};
00:14:25.358  json.nvme_error_information_log.table[9].error_count = 38660;
00:14:25.358  json.nvme_error_information_log.table[9].lba = {};
00:14:25.358  json.nvme_error_information_log.table[9].lba.value = 0;
00:14:25.358  json.nvme_error_information_log.table[9].phase_tag = false;
00:14:25.358  json.nvme_error_information_log.table[9].status_field = {};
00:14:25.358  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:14:25.358  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:14:25.358  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:14:25.358  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:14:25.358  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:14:25.358  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:14:25.358  json.nvme_error_information_log.unread = 48;
00:14:25.358  json.nvme_ieee_oui_identifier = 6083300;
00:14:25.358  json.nvme_number_of_namespaces = 128;
00:14:25.358  json.nvme_pci_vendor = {};
00:14:25.358  json.nvme_pci_vendor.id = 32902;
00:14:25.358  json.nvme_pci_vendor.subsystem_id = 32902;
00:14:25.358  json.nvme_smart_health_information_log = {};
00:14:25.358  json.nvme_smart_health_information_log.available_spare = 99;
00:14:25.358  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:14:25.358  json.nvme_smart_health_information_log.controller_busy_time = 3917;
00:14:25.358  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:14:25.358  json.nvme_smart_health_information_log.critical_warning = 0;
00:14:25.358  json.nvme_smart_health_information_log.data_units_read = 628379981;
00:14:25.358  json.nvme_smart_health_information_log.data_units_written = 790799418;
00:14:25.358  json.nvme_smart_health_information_log.host_reads = 36986167763;
00:14:25.358  json.nvme_smart_health_information_log.host_writes = 42949937725;
00:14:25.358  json.nvme_smart_health_information_log.media_errors = 0;
00:14:25.358  json.nvme_smart_health_information_log.num_err_log_entries = 38669;
00:14:25.358  json.nvme_smart_health_information_log.percentage_used = 32;
00:14:25.358  json.nvme_smart_health_information_log.power_cycles = 31;
00:14:25.358  json.nvme_smart_health_information_log.power_on_hours = 20842;
00:14:25.358  json.nvme_smart_health_information_log.temperature = 37;
00:14:25.358  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:14:25.358  json.nvme_smart_health_information_log.warning_temp_time = 2198;
00:14:25.358  json.nvme_total_capacity = 4000787030016;
00:14:25.358  json.nvme_unallocated_capacity = 0;
00:14:25.358  json.nvme_version = {};
00:14:25.358  json.nvme_version.string = "1.2";
00:14:25.358  json.nvme_version.value = 66048;
00:14:25.358  json.power_cycle_count = 31;
00:14:25.358  json.power_on_time = {};
00:14:25.358  json.power_on_time.hours = 20842;
00:14:25.358  json.serial_number = "BTLJ83030AK84P0DGN";
00:14:25.358  json.smartctl = {};
00:14:25.358  json.smartctl.argv = [];
00:14:25.358  json.smartctl.argv[0] = "smartctl";
00:14:25.358  json.smartctl.argv[1] = "-d";
00:14:25.358  json.smartctl.argv[2] = "nvme";
00:14:25.358  json.smartctl.argv[3] = "--json=g";
00:14:25.358  json.smartctl.argv[4] = "-a";
00:14:25.358  json.smartctl.build_info = "(local build)";
00:14:25.358  json.smartctl.exit_status = 0;
00:14:25.358  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:14:25.358  json.smartctl.pre_release = false;
00:14:25.358  json.smartctl.svn_revision = "5530";
00:14:25.358  json.smartctl.version = [];
00:14:25.358  json.smartctl.version[0] = 7;
00:14:25.358  json.smartctl.version[1] = 4;
00:14:25.358  json.smart_status = {};
00:14:25.358  json.smart_status.nvme = {};
00:14:25.358  json.smart_status.nvme.value = 0;
00:14:25.358  json.smart_status.passed = true;
00:14:25.358  json.smart_support = {};
00:14:25.358  json.smart_support.available = true;
00:14:25.358  json.smart_support.enabled = true;
00:14:25.358  json.temperature = {};
00:14:25.358  json.temperature.current = 37;'
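The @25 capture above normalizes smartctl's grouped JSON output (--json=g) so the kernel and CUSE views can be compared verbatim later: grep -v drops the lines that embed the device path (which necessarily differs between /dev/nvme0 and /dev/spdk/nvme0) and sort fixes the line order. A minimal sketch of the same capture, with the device path as the only assumption:

    dev=/dev/nvme0
    # smartctl exits with a bitmask status, so tolerate non-zero under 'set -e'.
    smart_json=$(smartctl -d nvme --json=g -a "$dev" | grep -v "$dev" | sort) || true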
00:14:25.358   10:52:14	-- cuse/spdk_smartctl_cuse.sh@27 -- # smartctl -d nvme -i /dev/nvme0n1
00:14:25.358  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:25.358  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:25.358  
00:14:25.358  === START OF INFORMATION SECTION ===
00:14:25.358  Model Number:                       INTEL SSDPE2KX040T8
00:14:25.358  Serial Number:                      BTLJ83030AK84P0DGN
00:14:25.358  Firmware Version:                   VDV10184
00:14:25.358  PCI Vendor/Subsystem ID:            0x8086
00:14:25.358  IEEE OUI Identifier:                0x5cd2e4
00:14:25.358  Total NVM Capacity:                 4,000,787,030,016 [4.00 TB]
00:14:25.358  Unallocated NVM Capacity:           0
00:14:25.358  Controller ID:                      0
00:14:25.358  NVMe Version:                       1.2
00:14:25.358  Number of Namespaces:               128
00:14:25.358  Namespace 1 Size/Capacity:          4,000,787,030,016 [4.00 TB]
00:14:25.358  Namespace 1 Formatted LBA Size:     512
00:14:25.358  Namespace 1 IEEE EUI-64:            000000 0000009f6e
00:14:25.359  Local Time is:                      Sun Dec 15 10:52:14 2024 CET
00:14:25.359  
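@27 prints the text-form identify data for the namespace device. Individual fields are easy to pull out of this layout with awk; a sketch for the serial number (the field choice is illustrative only):

    serial=$(smartctl -d nvme -i /dev/nvme0n1 | awk -F': +' '/^Serial Number:/ {print $2}')
    echo "$serial"                                          # -> BTLJ83030AK84P0DGN in this run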
00:14:25.359    10:52:14	-- cuse/spdk_smartctl_cuse.sh@30 -- # smartctl -d nvme -l error /dev/nvme0
00:14:25.359   10:52:14	-- cuse/spdk_smartctl_cuse.sh@30 -- # KERNEL_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:25.359  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:25.359  
00:14:25.359  === START OF SMART DATA SECTION ===
00:14:25.359  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:14:25.359  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:14:25.359    0      38669     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    1      38668     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    2      38667     0       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    3      38666     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    4      38665     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    5      38664     0       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    6      38663     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    7      38662     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    8      38661     0       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359    9      38660     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359   10      38659     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359   11      38658     0       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359   12      38657     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359   13      38656     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359   14      38655     0       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359   15      38654     2       -  0xc00c      -            0     -     -  Internal Error
00:14:25.359  ... (48 entries not read)'
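@30 stores the kernel-side view of the Error Information log (NVMe log page 0x01, 16 of 64 entries read). The entries count down contiguously from num_err_log_entries = 38669, and every one carries status 0xc00c, the raw completion status word including the phase-tag bit (the JSON's status_field.value = 24582 is the same word shifted past that bit). Decoding it confirms the fields shown above:

    status=0xc00c
    printf 'DNR=%d SCT=%d SC=0x%02x\n' \
        $(( (status >> 15) & 1 )) $(( (status >> 9) & 7 )) $(( (status >> 1) & 0xff ))
    # -> DNR=1 SCT=0 SC=0x06, i.e. generic command status 06h, Internal Error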
00:14:25.359   10:52:14	-- cuse/spdk_smartctl_cuse.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:14:28.651  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:14:28.651  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:14:31.946  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:14:31.946   10:52:20	-- cuse/spdk_smartctl_cuse.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:14:31.946   10:52:20	-- cuse/spdk_smartctl_cuse.sh@35 -- # spdk_tgt_pid=2159657
00:14:31.946   10:52:20	-- cuse/spdk_smartctl_cuse.sh@36 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:14:31.946   10:52:20	-- cuse/spdk_smartctl_cuse.sh@38 -- # waitforlisten 2159657
00:14:31.946   10:52:20	-- common/autotest_common.sh@829 -- # '[' -z 2159657 ']'
00:14:31.946   10:52:20	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:31.946   10:52:20	-- common/autotest_common.sh@834 -- # local max_retries=100
00:14:31.946   10:52:20	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:31.946  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:31.946   10:52:20	-- common/autotest_common.sh@838 -- # xtrace_disable
00:14:31.946   10:52:20	-- common/autotest_common.sh@10 -- # set +x
00:14:31.946  [2024-12-15 10:52:20.606189] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:14:31.946  [2024-12-15 10:52:20.606256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2159657 ]
00:14:31.946  EAL: No free 2048 kB hugepages reported on node 1
00:14:31.946  [2024-12-15 10:52:20.713272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:31.946  [2024-12-15 10:52:20.820139] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:14:31.946  [2024-12-15 10:52:20.820324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:14:31.946  [2024-12-15 10:52:20.820329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:32.205  [2024-12-15 10:52:21.020931] 'OCF_Core' volume operations registered
00:14:32.205  [2024-12-15 10:52:21.024419] 'OCF_Cache' volume operations registered
00:14:32.205  [2024-12-15 10:52:21.028371] 'OCF Composite' volume operations registered
00:14:32.205  [2024-12-15 10:52:21.031870] 'SPDK_block_device' volume operations registered
00:14:32.775   10:52:21	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:32.775   10:52:21	-- common/autotest_common.sh@862 -- # return 0
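waitforlisten (@38, via autotest_common.sh) blocks until the freshly started target answers on /var/tmp/spdk.sock, retrying up to max_retries = 100 times while checking the process is still alive. A reduced sketch of that pattern, polling with a cheap RPC:

    pid=$spdk_tgt_pid
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" || exit 1                          # target died during startup
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done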
00:14:32.775   10:52:21	-- cuse/spdk_smartctl_cuse.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:14:36.067  Nvme0n1
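@40 attaches the PCIe controller to the running target as Nvme0; the RPC replies with the bdev it created, Nvme0n1. The same call from a shell, with a follow-up query to inspect the new bdev:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1              # JSON description of the namespace bdev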
00:14:36.067   10:52:24	-- cuse/spdk_smartctl_cuse.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:14:36.067  [2024-12-15 10:52:24.920569] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:14:36.067  [2024-12-15 10:52:24.920725] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:14:36.067  [2024-12-15 10:52:24.920838] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:14:36.067   10:52:24	-- cuse/spdk_smartctl_cuse.sh@43 -- # sleep 5
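@41 exposes the attached controller back to the kernel through CUSE, and the notices above show the two fuse sessions coming up as spdk/nvme0 and spdk/nvme0n1, i.e. /dev/spdk/nvme0 and /dev/spdk/nvme0n1. The script then sleeps a fixed 5 s before @45 asserts the character device exists; polling, as sketched below, would be the event-driven alternative (the loop is an assumption, not what the test runs):

    scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
    for ((i = 0; i < 50; i++)); do
        [[ -c /dev/spdk/nvme0 && -c /dev/spdk/nvme0n1 ]] && break   # CUSE nodes are char devices
        sleep 0.1
    done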
00:14:41.345   10:52:29	-- cuse/spdk_smartctl_cuse.sh@45 -- # '[' '!' -c /dev/spdk/nvme0 ']'
00:14:41.345    10:52:29	-- cuse/spdk_smartctl_cuse.sh@49 -- # smartctl -d nvme --json=g -a /dev/spdk/nvme0
00:14:41.345    10:52:29	-- cuse/spdk_smartctl_cuse.sh@49 -- # grep -v /dev/spdk/nvme0
00:14:41.345    10:52:29	-- cuse/spdk_smartctl_cuse.sh@49 -- # sort
00:14:41.345  [2024-12-15 10:52:29.974503] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
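The *ERROR* line is expected noise rather than a failure: 0x4E40 packs ioctl type 'N' (0x4E) with number 0x40, i.e. _IO('N', 0x40), which is NVME_IOCTL_ID from <linux/nvme_ioctl.h>. smartctl probes it, the CUSE controller node does not implement it, and the run still finishes with exit_status = 0 (see the JSON below).

    # How 0x4E40 decomposes under the _IO() packing (dir and size fields are zero):
    printf '0x%X\n' $(( (0x4E << 8) | 0x40 ))             # -> 0x4E40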
00:14:41.345   10:52:30	-- cuse/spdk_smartctl_cuse.sh@49 -- # CUSE_SMART_JSON='json = {};
00:14:41.345  json.device = {};
00:14:41.345  json.device.protocol = "NVMe";
00:14:41.345  json.device.type = "nvme";
00:14:41.346  json.firmware_version = "VDV10184";
00:14:41.346  json.json_format_version = [];
00:14:41.346  json.json_format_version[0] = 1;
00:14:41.346  json.json_format_version[1] = 0;
00:14:41.346  json.local_time = {};
00:14:41.346  json.local_time.asctime = "Sun Dec 15 10:52:29 2024 CET";
00:14:41.346  json.local_time.time_t = 1734256349;
00:14:41.346  json.model_name = "INTEL SSDPE2KX040T8";
00:14:41.346  json.nvme_controller_id = 0;
00:14:41.346  json.nvme_error_information_log = {};
00:14:41.346  json.nvme_error_information_log.read = 16;
00:14:41.346  json.nvme_error_information_log.size = 64;
00:14:41.346  json.nvme_error_information_log.table = [];
00:14:41.346  json.nvme_error_information_log.table[0] = {};
00:14:41.346  json.nvme_error_information_log.table[0].error_count = 38669;
00:14:41.346  json.nvme_error_information_log.table[0].lba = {};
00:14:41.346  json.nvme_error_information_log.table[0].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[0].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[0].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[1] = {};
00:14:41.346  json.nvme_error_information_log.table[10] = {};
00:14:41.346  json.nvme_error_information_log.table[10].error_count = 38659;
00:14:41.346  json.nvme_error_information_log.table[10].lba = {};
00:14:41.346  json.nvme_error_information_log.table[10].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[10].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[10].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[11] = {};
00:14:41.346  json.nvme_error_information_log.table[11].error_count = 38658;
00:14:41.346  json.nvme_error_information_log.table[11].lba = {};
00:14:41.346  json.nvme_error_information_log.table[11].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[11].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[11].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:14:41.346  json.nvme_error_information_log.table[12] = {};
00:14:41.346  json.nvme_error_information_log.table[12].error_count = 38657;
00:14:41.346  json.nvme_error_information_log.table[12].lba = {};
00:14:41.346  json.nvme_error_information_log.table[12].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[12].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[12].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[13] = {};
00:14:41.346  json.nvme_error_information_log.table[13].error_count = 38656;
00:14:41.346  json.nvme_error_information_log.table[13].lba = {};
00:14:41.346  json.nvme_error_information_log.table[13].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[13].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[13].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[14] = {};
00:14:41.346  json.nvme_error_information_log.table[14].error_count = 38655;
00:14:41.346  json.nvme_error_information_log.table[14].lba = {};
00:14:41.346  json.nvme_error_information_log.table[14].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[14].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[14].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:14:41.346  json.nvme_error_information_log.table[15] = {};
00:14:41.346  json.nvme_error_information_log.table[15].error_count = 38654;
00:14:41.346  json.nvme_error_information_log.table[15].lba = {};
00:14:41.346  json.nvme_error_information_log.table[15].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[15].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[15].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[1].error_count = 38668;
00:14:41.346  json.nvme_error_information_log.table[1].lba = {};
00:14:41.346  json.nvme_error_information_log.table[1].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[1].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[1].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[2] = {};
00:14:41.346  json.nvme_error_information_log.table[2].error_count = 38667;
00:14:41.346  json.nvme_error_information_log.table[2].lba = {};
00:14:41.346  json.nvme_error_information_log.table[2].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[2].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[2].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:14:41.346  json.nvme_error_information_log.table[3] = {};
00:14:41.346  json.nvme_error_information_log.table[3].error_count = 38666;
00:14:41.346  json.nvme_error_information_log.table[3].lba = {};
00:14:41.346  json.nvme_error_information_log.table[3].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[3].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[3].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[4] = {};
00:14:41.346  json.nvme_error_information_log.table[4].error_count = 38665;
00:14:41.346  json.nvme_error_information_log.table[4].lba = {};
00:14:41.346  json.nvme_error_information_log.table[4].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[4].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[4].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:14:41.346  json.nvme_error_information_log.table[5] = {};
00:14:41.346  json.nvme_error_information_log.table[5].error_count = 38664;
00:14:41.346  json.nvme_error_information_log.table[5].lba = {};
00:14:41.346  json.nvme_error_information_log.table[5].lba.value = 0;
00:14:41.346  json.nvme_error_information_log.table[5].phase_tag = false;
00:14:41.346  json.nvme_error_information_log.table[5].status_field = {};
00:14:41.346  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:14:41.346  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:14:41.346  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:14:41.346  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:14:41.346  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:14:41.346  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:14:41.346  json.nvme_error_information_log.table[6] = {};
00:14:41.346  json.nvme_error_information_log.table[6].error_count = 38663;
00:14:41.346  json.nvme_error_information_log.table[6].lba = {};
00:14:41.346  json.nvme_error_information_log.table[6].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[6].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[6].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:14:41.347  json.nvme_error_information_log.table[7] = {};
00:14:41.347  json.nvme_error_information_log.table[7].error_count = 38662;
00:14:41.347  json.nvme_error_information_log.table[7].lba = {};
00:14:41.347  json.nvme_error_information_log.table[7].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[7].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[7].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:14:41.347  json.nvme_error_information_log.table[8] = {};
00:14:41.347  json.nvme_error_information_log.table[8].error_count = 38661;
00:14:41.347  json.nvme_error_information_log.table[8].lba = {};
00:14:41.347  json.nvme_error_information_log.table[8].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[8].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[8].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:14:41.347  json.nvme_error_information_log.table[9] = {};
00:14:41.347  json.nvme_error_information_log.table[9].error_count = 38660;
00:14:41.347  json.nvme_error_information_log.table[9].lba = {};
00:14:41.347  json.nvme_error_information_log.table[9].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[9].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[9].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:14:41.347  json.nvme_error_information_log.unread = 48;
00:14:41.347  json.nvme_ieee_oui_identifier = 6083300;
00:14:41.347  json.nvme_number_of_namespaces = 128;
00:14:41.347  json.nvme_pci_vendor = {};
00:14:41.347  json.nvme_pci_vendor.id = 32902;
00:14:41.347  json.nvme_pci_vendor.subsystem_id = 32902;
00:14:41.347  json.nvme_smart_health_information_log = {};
00:14:41.347  json.nvme_smart_health_information_log.available_spare = 99;
00:14:41.347  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:14:41.347  json.nvme_smart_health_information_log.controller_busy_time = 3917;
00:14:41.347  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:14:41.347  json.nvme_smart_health_information_log.critical_warning = 0;
00:14:41.347  json.nvme_smart_health_information_log.data_units_read = 628379983;
00:14:41.347  json.nvme_smart_health_information_log.data_units_written = 790799418;
00:14:41.347  json.nvme_smart_health_information_log.host_reads = 36986167818;
00:14:41.347  json.nvme_smart_health_information_log.host_writes = 42949937725;
00:14:41.347  json.nvme_smart_health_information_log.media_errors = 0;
00:14:41.347  json.nvme_smart_health_information_log.num_err_log_entries = 38669;
00:14:41.347  json.nvme_smart_health_information_log.percentage_used = 32;
00:14:41.347  json.nvme_smart_health_information_log.power_cycles = 31;
00:14:41.347  json.nvme_smart_health_information_log.power_on_hours = 20842;
00:14:41.347  json.nvme_smart_health_information_log.temperature = 37;
00:14:41.347  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:14:41.347  json.nvme_smart_health_information_log.warning_temp_time = 2198;
00:14:41.347  json.nvme_total_capacity = 4000787030016;
00:14:41.347  json.nvme_unallocated_capacity = 0;
00:14:41.347  json.nvme_version = {};
00:14:41.347  json.nvme_version.string = "1.2";
00:14:41.347  json.nvme_version.value = 66048;
00:14:41.347  json.power_cycle_count = 31;
00:14:41.347  json.power_on_time = {};
00:14:41.347  json.power_on_time.hours = 20842;
00:14:41.347  json.serial_number = "BTLJ83030AK84P0DGN";
00:14:41.347  json.smartctl = {};
00:14:41.347  json.smartctl.argv = [];
00:14:41.347  json.smartctl.argv[0] = "smartctl";
00:14:41.347  json.smartctl.argv[1] = "-d";
00:14:41.347  json.smartctl.argv[2] = "nvme";
00:14:41.347  json.smartctl.argv[3] = "--json=g";
00:14:41.347  json.smartctl.argv[4] = "-a";
00:14:41.347  json.smartctl.build_info = "(local build)";
00:14:41.347  json.smartctl.exit_status = 0;
00:14:41.347  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:14:41.347  json.smartctl.pre_release = false;
00:14:41.347  json.smartctl.svn_revision = "5530";
00:14:41.347  json.smartctl.version = [];
00:14:41.347  json.smartctl.version[0] = 7;
00:14:41.347  json.smartctl.version[1] = 4;
00:14:41.347  json.smart_status = {};
00:14:41.347  json.smart_status.nvme = {};
00:14:41.347  json.smart_status.nvme.value = 0;
00:14:41.347  json.smart_status.passed = true;
00:14:41.347  json.smart_support = {};
00:14:41.347  json.smart_support.available = true;
00:14:41.347  json.smart_support.enabled = true;
00:14:41.347  json.temperature = {};
00:14:41.347  json.temperature.current = 37;'
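@51 is the heart of the test: with both captures normalized identically, diff's group formats turn it into a one-sided comparison. --changed-group-format='%<' prints only the lines unique to the first input and the empty --unchanged-group-format= suppresses everything the kernel and CUSE views agree on, so what comes out should be limited to fields that legitimately drift between the two invocations (here local_time, data_units_read, and host_reads, as the two dumps above show). The shape of the call:

    diff --changed-group-format='%<' --unchanged-group-format='' \
        <(echo "$KERNEL_SMART_JSON") <(echo "$CUSE_SMART_JSON")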
00:14:41.347    10:52:30	-- cuse/spdk_smartctl_cuse.sh@51 -- # diff '--changed-group-format=%<' --unchanged-group-format= /dev/fd/62 /dev/fd/61
00:14:41.347     10:52:30	-- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {};
00:14:41.347  json.device = {};
00:14:41.347  json.device.protocol = "NVMe";
00:14:41.347  json.device.type = "nvme";
00:14:41.347  json.firmware_version = "VDV10184";
00:14:41.347  json.json_format_version = [];
00:14:41.347  json.json_format_version[0] = 1;
00:14:41.347  json.json_format_version[1] = 0;
00:14:41.347  json.local_time = {};
00:14:41.347  json.local_time.asctime = "Sun Dec 15 10:52:14 2024 CET";
00:14:41.347  json.local_time.time_t = 1734256334;
00:14:41.347  json.model_name = "INTEL SSDPE2KX040T8";
00:14:41.347  json.nvme_controller_id = 0;
00:14:41.347  json.nvme_error_information_log = {};
00:14:41.347  json.nvme_error_information_log.read = 16;
00:14:41.347  json.nvme_error_information_log.size = 64;
00:14:41.347  json.nvme_error_information_log.table = [];
00:14:41.347  json.nvme_error_information_log.table[0] = {};
00:14:41.347  json.nvme_error_information_log.table[0].error_count = 38669;
00:14:41.347  json.nvme_error_information_log.table[0].lba = {};
00:14:41.347  json.nvme_error_information_log.table[0].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[0].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[0].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:14:41.347  json.nvme_error_information_log.table[1] = {};
00:14:41.347  json.nvme_error_information_log.table[10] = {};
00:14:41.347  json.nvme_error_information_log.table[10].error_count = 38659;
00:14:41.347  json.nvme_error_information_log.table[10].lba = {};
00:14:41.347  json.nvme_error_information_log.table[10].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[10].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[10].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:14:41.347  json.nvme_error_information_log.table[11] = {};
00:14:41.347  json.nvme_error_information_log.table[11].error_count = 38658;
00:14:41.347  json.nvme_error_information_log.table[11].lba = {};
00:14:41.347  json.nvme_error_information_log.table[11].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[11].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[11].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:14:41.347  json.nvme_error_information_log.table[12] = {};
00:14:41.347  json.nvme_error_information_log.table[12].error_count = 38657;
00:14:41.347  json.nvme_error_information_log.table[12].lba = {};
00:14:41.347  json.nvme_error_information_log.table[12].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[12].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[12].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:14:41.347  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:14:41.347  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:14:41.347  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:14:41.347  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:14:41.347  json.nvme_error_information_log.table[13] = {};
00:14:41.347  json.nvme_error_information_log.table[13].error_count = 38656;
00:14:41.347  json.nvme_error_information_log.table[13].lba = {};
00:14:41.347  json.nvme_error_information_log.table[13].lba.value = 0;
00:14:41.347  json.nvme_error_information_log.table[13].phase_tag = false;
00:14:41.347  json.nvme_error_information_log.table[13].status_field = {};
00:14:41.347  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:14:41.347  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.table[14] = {};
00:14:41.348  json.nvme_error_information_log.table[14].error_count = 38655;
00:14:41.348  json.nvme_error_information_log.table[14].lba = {};
00:14:41.348  json.nvme_error_information_log.table[14].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[14].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[14].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:14:41.348  json.nvme_error_information_log.table[15] = {};
00:14:41.348  json.nvme_error_information_log.table[15].error_count = 38654;
00:14:41.348  json.nvme_error_information_log.table[15].lba = {};
00:14:41.348  json.nvme_error_information_log.table[15].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[15].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[15].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.table[1].error_count = 38668;
00:14:41.348  json.nvme_error_information_log.table[1].lba = {};
00:14:41.348  json.nvme_error_information_log.table[1].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[1].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[1].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.table[2] = {};
00:14:41.348  json.nvme_error_information_log.table[2].error_count = 38667;
00:14:41.348  json.nvme_error_information_log.table[2].lba = {};
00:14:41.348  json.nvme_error_information_log.table[2].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[2].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[2].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:14:41.348  json.nvme_error_information_log.table[3] = {};
00:14:41.348  json.nvme_error_information_log.table[3].error_count = 38666;
00:14:41.348  json.nvme_error_information_log.table[3].lba = {};
00:14:41.348  json.nvme_error_information_log.table[3].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[3].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[3].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.table[4] = {};
00:14:41.348  json.nvme_error_information_log.table[4].error_count = 38665;
00:14:41.348  json.nvme_error_information_log.table[4].lba = {};
00:14:41.348  json.nvme_error_information_log.table[4].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[4].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[4].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.table[5] = {};
00:14:41.348  json.nvme_error_information_log.table[5].error_count = 38664;
00:14:41.348  json.nvme_error_information_log.table[5].lba = {};
00:14:41.348  json.nvme_error_information_log.table[5].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[5].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[5].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:14:41.348  json.nvme_error_information_log.table[6] = {};
00:14:41.348  json.nvme_error_information_log.table[6].error_count = 38663;
00:14:41.348  json.nvme_error_information_log.table[6].lba = {};
00:14:41.348  json.nvme_error_information_log.table[6].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[6].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[6].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.table[7] = {};
00:14:41.348  json.nvme_error_information_log.table[7].error_count = 38662;
00:14:41.348  json.nvme_error_information_log.table[7].lba = {};
00:14:41.348  json.nvme_error_information_log.table[7].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[7].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[7].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.table[8] = {};
00:14:41.348  json.nvme_error_information_log.table[8].error_count = 38661;
00:14:41.348  json.nvme_error_information_log.table[8].lba = {};
00:14:41.348  json.nvme_error_information_log.table[8].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[8].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[8].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:14:41.348  json.nvme_error_information_log.table[9] = {};
00:14:41.348  json.nvme_error_information_log.table[9].error_count = 38660;
00:14:41.348  json.nvme_error_information_log.table[9].lba = {};
00:14:41.348  json.nvme_error_information_log.table[9].lba.value = 0;
00:14:41.348  json.nvme_error_information_log.table[9].phase_tag = false;
00:14:41.348  json.nvme_error_information_log.table[9].status_field = {};
00:14:41.348  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:14:41.348  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:14:41.348  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:14:41.348  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:14:41.348  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:14:41.348  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:14:41.348  json.nvme_error_information_log.unread = 48;
00:14:41.348  json.nvme_ieee_oui_identifier = 6083300;
00:14:41.348  json.nvme_number_of_namespaces = 128;
00:14:41.348  json.nvme_pci_vendor = {};
00:14:41.348  json.nvme_pci_vendor.id = 32902;
00:14:41.348  json.nvme_pci_vendor.subsystem_id = 32902;
00:14:41.348  json.nvme_smart_health_information_log = {};
00:14:41.348  json.nvme_smart_health_information_log.available_spare = 99;
00:14:41.348  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:14:41.348  json.nvme_smart_health_information_log.controller_busy_time = 3917;
00:14:41.348  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:14:41.348  json.nvme_smart_health_information_log.critical_warning = 0;
00:14:41.349  json.nvme_smart_health_information_log.data_units_read = 628379981;
00:14:41.349  json.nvme_smart_health_information_log.data_units_written = 790799418;
00:14:41.349  json.nvme_smart_health_information_log.host_reads = 36986167763;
00:14:41.349  json.nvme_smart_health_information_log.host_writes = 42949937725;
00:14:41.349  json.nvme_smart_health_information_log.media_errors = 0;
00:14:41.349  json.nvme_smart_health_information_log.num_err_log_entries = 38669;
00:14:41.349  json.nvme_smart_health_information_log.percentage_used = 32;
00:14:41.349  json.nvme_smart_health_information_log.power_cycles = 31;
00:14:41.349  json.nvme_smart_health_information_log.power_on_hours = 20842;
00:14:41.349  json.nvme_smart_health_information_log.temperature = 37;
00:14:41.349  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:14:41.349  json.nvme_smart_health_information_log.warning_temp_time = 2198;
00:14:41.349  json.nvme_total_capacity = 4000787030016;
00:14:41.349  json.nvme_unallocated_capacity = 0;
00:14:41.349  json.nvme_version = {};
00:14:41.349  json.nvme_version.string = "1.2";
00:14:41.349  json.nvme_version.value = 66048;
00:14:41.349  json.power_cycle_count = 31;
00:14:41.349  json.power_on_time = {};
00:14:41.349  json.power_on_time.hours = 20842;
00:14:41.349  json.serial_number = "BTLJ83030AK84P0DGN";
00:14:41.349  json.smartctl = {};
00:14:41.349  json.smartctl.argv = [];
00:14:41.349  json.smartctl.argv[0] = "smartctl";
00:14:41.349  json.smartctl.argv[1] = "-d";
00:14:41.349  json.smartctl.argv[2] = "nvme";
00:14:41.349  json.smartctl.argv[3] = "--json=g";
00:14:41.349  json.smartctl.argv[4] = "-a";
00:14:41.349  json.smartctl.build_info = "(local build)";
00:14:41.349  json.smartctl.exit_status = 0;
00:14:41.349  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:14:41.349  json.smartctl.pre_release = false;
00:14:41.349  json.smartctl.svn_revision = "5530";
00:14:41.349  json.smartctl.version = [];
00:14:41.349  json.smartctl.version[0] = 7;
00:14:41.349  json.smartctl.version[1] = 4;
00:14:41.349  json.smart_status = {};
00:14:41.349  json.smart_status.nvme = {};
00:14:41.349  json.smart_status.nvme.value = 0;
00:14:41.349  json.smart_status.passed = true;
00:14:41.349  json.smart_support = {};
00:14:41.349  json.smart_support.available = true;
00:14:41.349  json.smart_support.enabled = true;
00:14:41.349  json.temperature = {};
00:14:41.349  json.temperature.current = 37;'
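The dump that ends above and the one echoed next are the two halves of the check at spdk_smartctl_cuse.sh@51: smartctl is run twice in its flat, parseable mode (the argv echoed inside each dump shows `smartctl -d nvme --json=g -a`), once against the kernel NVMe node and once against the CUSE node, so the outputs can be diffed line by line. A minimal sketch of that capture, with the device paths assumed from this run:

    # Sketch only; /dev/nvme0 and /dev/spdk/nvme0 are assumptions based on this run.
    KERNEL_SMART_JSON=$(smartctl -d nvme --json=g -a /dev/nvme0)
    CUSE_SMART_JSON=$(smartctl -d nvme --json=g -a /dev/spdk/nvme0)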
00:14:41.349     10:52:30	-- cuse/spdk_smartctl_cuse.sh@51 -- # echo 'json = {};
00:14:41.349  json.device = {};
00:14:41.349  json.device.protocol = "NVMe";
00:14:41.349  json.device.type = "nvme";
00:14:41.349  json.firmware_version = "VDV10184";
00:14:41.349  json.json_format_version = [];
00:14:41.349  json.json_format_version[0] = 1;
00:14:41.349  json.json_format_version[1] = 0;
00:14:41.349  json.local_time = {};
00:14:41.349  json.local_time.asctime = "Sun Dec 15 10:52:29 2024 CET";
00:14:41.349  json.local_time.time_t = 1734256349;
00:14:41.349  json.model_name = "INTEL SSDPE2KX040T8";
00:14:41.349  json.nvme_controller_id = 0;
00:14:41.349  json.nvme_error_information_log = {};
00:14:41.349  json.nvme_error_information_log.read = 16;
00:14:41.349  json.nvme_error_information_log.size = 64;
00:14:41.349  json.nvme_error_information_log.table = [];
00:14:41.349  json.nvme_error_information_log.table[0] = {};
00:14:41.349  json.nvme_error_information_log.table[0].error_count = 38669;
00:14:41.349  json.nvme_error_information_log.table[0].lba = {};
00:14:41.349  json.nvme_error_information_log.table[0].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[0].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[0].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[0].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[0].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[0].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[0].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[0].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[0].submission_queue_id = 2;
00:14:41.349  json.nvme_error_information_log.table[1] = {};
00:14:41.349  json.nvme_error_information_log.table[10] = {};
00:14:41.349  json.nvme_error_information_log.table[10].error_count = 38659;
00:14:41.349  json.nvme_error_information_log.table[10].lba = {};
00:14:41.349  json.nvme_error_information_log.table[10].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[10].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[10].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[10].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[10].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[10].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[10].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[10].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[10].submission_queue_id = 2;
00:14:41.349  json.nvme_error_information_log.table[11] = {};
00:14:41.349  json.nvme_error_information_log.table[11].error_count = 38658;
00:14:41.349  json.nvme_error_information_log.table[11].lba = {};
00:14:41.349  json.nvme_error_information_log.table[11].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[11].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[11].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[11].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[11].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[11].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[11].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[11].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[11].submission_queue_id = 0;
00:14:41.349  json.nvme_error_information_log.table[12] = {};
00:14:41.349  json.nvme_error_information_log.table[12].error_count = 38657;
00:14:41.349  json.nvme_error_information_log.table[12].lba = {};
00:14:41.349  json.nvme_error_information_log.table[12].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[12].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[12].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[12].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[12].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[12].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[12].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[12].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[12].submission_queue_id = 2;
00:14:41.349  json.nvme_error_information_log.table[13] = {};
00:14:41.349  json.nvme_error_information_log.table[13].error_count = 38656;
00:14:41.349  json.nvme_error_information_log.table[13].lba = {};
00:14:41.349  json.nvme_error_information_log.table[13].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[13].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[13].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[13].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[13].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[13].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[13].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[13].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[13].submission_queue_id = 2;
00:14:41.349  json.nvme_error_information_log.table[14] = {};
00:14:41.349  json.nvme_error_information_log.table[14].error_count = 38655;
00:14:41.349  json.nvme_error_information_log.table[14].lba = {};
00:14:41.349  json.nvme_error_information_log.table[14].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[14].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[14].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[14].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[14].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[14].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[14].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[14].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[14].submission_queue_id = 0;
00:14:41.349  json.nvme_error_information_log.table[15] = {};
00:14:41.349  json.nvme_error_information_log.table[15].error_count = 38654;
00:14:41.349  json.nvme_error_information_log.table[15].lba = {};
00:14:41.349  json.nvme_error_information_log.table[15].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[15].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[15].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[15].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[15].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[15].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[15].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[15].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[15].submission_queue_id = 2;
00:14:41.349  json.nvme_error_information_log.table[1].error_count = 38668;
00:14:41.349  json.nvme_error_information_log.table[1].lba = {};
00:14:41.349  json.nvme_error_information_log.table[1].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[1].phase_tag = false;
00:14:41.349  json.nvme_error_information_log.table[1].status_field = {};
00:14:41.349  json.nvme_error_information_log.table[1].status_field.do_not_retry = true;
00:14:41.349  json.nvme_error_information_log.table[1].status_field.status_code = 6;
00:14:41.349  json.nvme_error_information_log.table[1].status_field.status_code_type = 0;
00:14:41.349  json.nvme_error_information_log.table[1].status_field.string = "Internal Error";
00:14:41.349  json.nvme_error_information_log.table[1].status_field.value = 24582;
00:14:41.349  json.nvme_error_information_log.table[1].submission_queue_id = 2;
00:14:41.349  json.nvme_error_information_log.table[2] = {};
00:14:41.349  json.nvme_error_information_log.table[2].error_count = 38667;
00:14:41.349  json.nvme_error_information_log.table[2].lba = {};
00:14:41.349  json.nvme_error_information_log.table[2].lba.value = 0;
00:14:41.349  json.nvme_error_information_log.table[2].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[2].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[2].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[2].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[2].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[2].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[2].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[2].submission_queue_id = 0;
00:14:41.350  json.nvme_error_information_log.table[3] = {};
00:14:41.350  json.nvme_error_information_log.table[3].error_count = 38666;
00:14:41.350  json.nvme_error_information_log.table[3].lba = {};
00:14:41.350  json.nvme_error_information_log.table[3].lba.value = 0;
00:14:41.350  json.nvme_error_information_log.table[3].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[3].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[3].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[3].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[3].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[3].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[3].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[3].submission_queue_id = 2;
00:14:41.350  json.nvme_error_information_log.table[4] = {};
00:14:41.350  json.nvme_error_information_log.table[4].error_count = 38665;
00:14:41.350  json.nvme_error_information_log.table[4].lba = {};
00:14:41.350  json.nvme_error_information_log.table[4].lba.value = 0;
00:14:41.350  json.nvme_error_information_log.table[4].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[4].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[4].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[4].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[4].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[4].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[4].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[4].submission_queue_id = 2;
00:14:41.350  json.nvme_error_information_log.table[5] = {};
00:14:41.350  json.nvme_error_information_log.table[5].error_count = 38664;
00:14:41.350  json.nvme_error_information_log.table[5].lba = {};
00:14:41.350  json.nvme_error_information_log.table[5].lba.value = 0;
00:14:41.350  json.nvme_error_information_log.table[5].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[5].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[5].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[5].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[5].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[5].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[5].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[5].submission_queue_id = 0;
00:14:41.350  json.nvme_error_information_log.table[6] = {};
00:14:41.350  json.nvme_error_information_log.table[6].error_count = 38663;
00:14:41.350  json.nvme_error_information_log.table[6].lba = {};
00:14:41.350  json.nvme_error_information_log.table[6].lba.value = 0;
00:14:41.350  json.nvme_error_information_log.table[6].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[6].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[6].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[6].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[6].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[6].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[6].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[6].submission_queue_id = 2;
00:14:41.350  json.nvme_error_information_log.table[7] = {};
00:14:41.350  json.nvme_error_information_log.table[7].error_count = 38662;
00:14:41.350  json.nvme_error_information_log.table[7].lba = {};
00:14:41.350  json.nvme_error_information_log.table[7].lba.value = 0;
00:14:41.350  json.nvme_error_information_log.table[7].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[7].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[7].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[7].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[7].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[7].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[7].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[7].submission_queue_id = 2;
00:14:41.350  json.nvme_error_information_log.table[8] = {};
00:14:41.350  json.nvme_error_information_log.table[8].error_count = 38661;
00:14:41.350  json.nvme_error_information_log.table[8].lba = {};
00:14:41.350  json.nvme_error_information_log.table[8].lba.value = 0;
00:14:41.350  json.nvme_error_information_log.table[8].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[8].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[8].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[8].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[8].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[8].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[8].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[8].submission_queue_id = 0;
00:14:41.350  json.nvme_error_information_log.table[9] = {};
00:14:41.350  json.nvme_error_information_log.table[9].error_count = 38660;
00:14:41.350  json.nvme_error_information_log.table[9].lba = {};
00:14:41.350  json.nvme_error_information_log.table[9].lba.value = 0;
00:14:41.350  json.nvme_error_information_log.table[9].phase_tag = false;
00:14:41.350  json.nvme_error_information_log.table[9].status_field = {};
00:14:41.350  json.nvme_error_information_log.table[9].status_field.do_not_retry = true;
00:14:41.350  json.nvme_error_information_log.table[9].status_field.status_code = 6;
00:14:41.350  json.nvme_error_information_log.table[9].status_field.status_code_type = 0;
00:14:41.350  json.nvme_error_information_log.table[9].status_field.string = "Internal Error";
00:14:41.350  json.nvme_error_information_log.table[9].status_field.value = 24582;
00:14:41.350  json.nvme_error_information_log.table[9].submission_queue_id = 2;
00:14:41.350  json.nvme_error_information_log.unread = 48;
00:14:41.350  json.nvme_ieee_oui_identifier = 6083300;
00:14:41.350  json.nvme_number_of_namespaces = 128;
00:14:41.350  json.nvme_pci_vendor = {};
00:14:41.350  json.nvme_pci_vendor.id = 32902;
00:14:41.350  json.nvme_pci_vendor.subsystem_id = 32902;
00:14:41.350  json.nvme_smart_health_information_log = {};
00:14:41.350  json.nvme_smart_health_information_log.available_spare = 99;
00:14:41.350  json.nvme_smart_health_information_log.available_spare_threshold = 10;
00:14:41.350  json.nvme_smart_health_information_log.controller_busy_time = 3917;
00:14:41.350  json.nvme_smart_health_information_log.critical_comp_time = 0;
00:14:41.350  json.nvme_smart_health_information_log.critical_warning = 0;
00:14:41.350  json.nvme_smart_health_information_log.data_units_read = 628379983;
00:14:41.350  json.nvme_smart_health_information_log.data_units_written = 790799418;
00:14:41.350  json.nvme_smart_health_information_log.host_reads = 36986167818;
00:14:41.350  json.nvme_smart_health_information_log.host_writes = 42949937725;
00:14:41.350  json.nvme_smart_health_information_log.media_errors = 0;
00:14:41.350  json.nvme_smart_health_information_log.num_err_log_entries = 38669;
00:14:41.350  json.nvme_smart_health_information_log.percentage_used = 32;
00:14:41.350  json.nvme_smart_health_information_log.power_cycles = 31;
00:14:41.350  json.nvme_smart_health_information_log.power_on_hours = 20842;
00:14:41.350  json.nvme_smart_health_information_log.temperature = 37;
00:14:41.350  json.nvme_smart_health_information_log.unsafe_shutdowns = 46;
00:14:41.350  json.nvme_smart_health_information_log.warning_temp_time = 2198;
00:14:41.350  json.nvme_total_capacity = 4000787030016;
00:14:41.350  json.nvme_unallocated_capacity = 0;
00:14:41.350  json.nvme_version = {};
00:14:41.350  json.nvme_version.string = "1.2";
00:14:41.350  json.nvme_version.value = 66048;
00:14:41.350  json.power_cycle_count = 31;
00:14:41.350  json.power_on_time = {};
00:14:41.350  json.power_on_time.hours = 20842;
00:14:41.350  json.serial_number = "BTLJ83030AK84P0DGN";
00:14:41.350  json.smartctl = {};
00:14:41.350  json.smartctl.argv = [];
00:14:41.350  json.smartctl.argv[0] = "smartctl";
00:14:41.350  json.smartctl.argv[1] = "-d";
00:14:41.350  json.smartctl.argv[2] = "nvme";
00:14:41.350  json.smartctl.argv[3] = "--json=g";
00:14:41.350  json.smartctl.argv[4] = "-a";
00:14:41.350  json.smartctl.build_info = "(local build)";
00:14:41.350  json.smartctl.exit_status = 0;
00:14:41.350  json.smartctl.platform_info = "x86_64-linux-6.8.9-200.fc39.x86_64";
00:14:41.350  json.smartctl.pre_release = false;
00:14:41.350  json.smartctl.svn_revision = "5530";
00:14:41.350  json.smartctl.version = [];
00:14:41.350  json.smartctl.version[0] = 7;
00:14:41.350  json.smartctl.version[1] = 4;
00:14:41.350  json.smart_status = {};
00:14:41.350  json.smart_status.nvme = {};
00:14:41.350  json.smart_status.nvme.value = 0;
00:14:41.350  json.smart_status.passed = true;
00:14:41.350  json.smart_support = {};
00:14:41.350  json.smart_support.available = true;
00:14:41.350  json.smart_support.enabled = true;
00:14:41.350  json.temperature = {};
00:14:41.350  json.temperature.current = 37;'
00:14:41.350    10:52:30	-- cuse/spdk_smartctl_cuse.sh@51 -- # true
00:14:41.350   10:52:30	-- cuse/spdk_smartctl_cuse.sh@51 -- # DIFF_SMART_JSON='json.local_time.asctime = "Sun Dec 15 10:52:14 2024 CET";
00:14:41.350  json.local_time.time_t = 1734256334;
00:14:41.350  json.nvme_smart_health_information_log.data_units_read = 628379981;
00:14:41.350  json.nvme_smart_health_information_log.host_reads = 36986167763;'
00:14:41.350    10:52:30	-- cuse/spdk_smartctl_cuse.sh@54 -- # grep -v 'json\.nvme_smart_health_information_log\.\|json\.local_time\.\|json\.temperature\.\|json\.power_on_time\.hours'
00:14:41.350    10:52:30	-- cuse/spdk_smartctl_cuse.sh@54 -- # true
00:14:41.350   10:52:30	-- cuse/spdk_smartctl_cuse.sh@54 -- # ERR_SMART_JSON=
00:14:41.350   10:52:30	-- cuse/spdk_smartctl_cuse.sh@56 -- # '[' -n '' ']'
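DIFF_SMART_JSON above holds the only lines on which the two dumps disagree: the capture timestamp and two read counters that advanced between the invocations. Step @54 strips exactly those volatile key families before the pass/fail decision, and since ERR_SMART_JSON comes back empty, the `[ -n '' ]` test at @56 is false and the CUSE device is judged to report the same SMART data as the kernel device. A sketch of that filter, using the grep pattern verbatim from the trace:

    # Sketch: drop fields that legitimately drift, then require an empty remainder.
    ERR_SMART_JSON=$(grep -v 'json\.nvme_smart_health_information_log\.\|json\.local_time\.\|json\.temperature\.\|json\.power_on_time\.hours' <<< "$DIFF_SMART_JSON") || true
    if [ -n "$ERR_SMART_JSON" ]; then
        echo "SMART JSON mismatch between kernel and CUSE nodes"
        exit 1
    fi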
00:14:41.350    10:52:30	-- cuse/spdk_smartctl_cuse.sh@61 -- # smartctl -d nvme -l error /dev/spdk/nvme0
00:14:41.350  [2024-12-15 10:52:30.086976] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:14:41.350   10:52:30	-- cuse/spdk_smartctl_cuse.sh@61 -- # CUSE_SMART_ERRLOG='smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.350  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.350  
00:14:41.350  === START OF SMART DATA SECTION ===
00:14:41.350  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:14:41.350  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:14:41.350    0      38669     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.350    1      38668     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.350    2      38667     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.350    3      38666     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.350    4      38665     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    5      38664     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    6      38663     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    7      38662     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    8      38661     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    9      38660     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   10      38659     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   11      38658     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   12      38657     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   13      38656     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   14      38655     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   15      38654     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351  ... (48 entries not read)'
00:14:41.351   10:52:30	-- cuse/spdk_smartctl_cuse.sh@62 -- # '[' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.351  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.351  
00:14:41.351  === START OF SMART DATA SECTION ===
00:14:41.351  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:14:41.351  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:14:41.351    0      38669     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    1      38668     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    2      38667     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    3      38666     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    4      38665     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    5      38664     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    6      38663     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    7      38662     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    8      38661     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    9      38660     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   10      38659     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   11      38658     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   12      38657     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   13      38656     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   14      38655     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   15      38654     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351  ... (48 entries not read)' '!=' 'smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.351  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.351  
00:14:41.351  === START OF SMART DATA SECTION ===
00:14:41.351  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:14:41.351  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:14:41.351    0      38669     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    1      38668     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    2      38667     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    3      38666     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    4      38665     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    5      38664     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    6      38663     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    7      38662     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    8      38661     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351    9      38660     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   10      38659     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   11      38658     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   12      38657     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   13      38656     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   14      38655     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351   15      38654     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.351  ... (48 entries not read)' ']'
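Steps @61-@62 repeat the comparison for the error log: the log is read through the CUSE node (the "Unsupported IOCTL 0x4E40" message from nvme_cuse.c is emitted for an ioctl smartctl probes but CUSE does not implement, and is harmless here), and the resulting text is string-compared against the error log captured earlier; the `'!='` test above shows the two strings identical. In outline (the name of the earlier capture is an assumption):

    # Sketch of the check at @62.
    CUSE_SMART_ERRLOG=$(smartctl -d nvme -l error /dev/spdk/nvme0)
    if [ "$CUSE_SMART_ERRLOG" != "$KERNEL_SMART_ERRLOG" ]; then
        exit 1
    fi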
00:14:41.351   10:52:30	-- cuse/spdk_smartctl_cuse.sh@68 -- # smartctl -d nvme -i /dev/spdk/nvme0n1
00:14:41.351  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.351  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.351  
00:14:41.351  === START OF INFORMATION SECTION ===
00:14:41.351  Model Number:                       INTEL SSDPE2KX040T8
00:14:41.351  Serial Number:                      BTLJ83030AK84P0DGN
00:14:41.351  Firmware Version:                   VDV10184
00:14:41.351  PCI Vendor/Subsystem ID:            0x8086
00:14:41.351  IEEE OUI Identifier:                0x5cd2e4
00:14:41.351  Total NVM Capacity:                 4,000,787,030,016 [4.00 TB]
00:14:41.351  Unallocated NVM Capacity:           0
00:14:41.351  Controller ID:                      0
00:14:41.351  NVMe Version:                       1.2
00:14:41.351  Number of Namespaces:               128
00:14:41.351  Namespace 1 Size/Capacity:          4,000,787,030,016 [4.00 TB]
00:14:41.351  Namespace 1 Formatted LBA Size:     512
00:14:41.351  Namespace 1 IEEE EUI-64:            000000 0000009f6e
00:14:41.351  Local Time is:                      Sun Dec 15 10:52:30 2024 CET
00:14:41.351  
00:14:41.351   10:52:30	-- cuse/spdk_smartctl_cuse.sh@69 -- # smartctl -d nvme -c /dev/spdk/nvme0
00:14:41.351  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.351  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.351  
00:14:41.351  [2024-12-15 10:52:30.228191] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:14:41.351  === START OF INFORMATION SECTION ===
00:14:41.351  Firmware Updates (0x18):            4 Slots, no Reset required
00:14:41.351  Optional Admin Commands (0x000e):   Format Frmw_DL NS_Mngmt
00:14:41.351  Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
00:14:41.351  Log Page Attributes (0x0e):         Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
00:14:41.351  Maximum Data Transfer Size:         32 Pages
00:14:41.351  Warning  Comp. Temp. Threshold:     70 Celsius
00:14:41.351  Critical Comp. Temp. Threshold:     80 Celsius
00:14:41.351  
00:14:41.351  Supported Power States
00:14:41.351  St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
00:14:41.351   0 +    20.00W       -        -    0  0  0  0        0       0
00:14:41.351  
00:14:41.351   10:52:30	-- cuse/spdk_smartctl_cuse.sh@70 -- # smartctl -d nvme -A /dev/spdk/nvme0
00:14:41.351  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.351  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.351  
00:14:41.351  [2024-12-15 10:52:30.274603] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:14:41.351  === START OF SMART DATA SECTION ===
00:14:41.351  SMART/Health Information (NVMe Log 0x02)
00:14:41.351  Critical Warning:                   0x00
00:14:41.351  Temperature:                        37 Celsius
00:14:41.351  Available Spare:                    99%
00:14:41.351  Available Spare Threshold:          10%
00:14:41.351  Percentage Used:                    32%
00:14:41.351  Data Units Read:                    628,379,983 [321 TB]
00:14:41.351  Data Units Written:                 790,799,418 [404 TB]
00:14:41.351  Host Read Commands:                 36,986,167,818
00:14:41.351  Host Write Commands:                42,949,937,725
00:14:41.351  Controller Busy Time:               3,917
00:14:41.351  Power Cycles:                       31
00:14:41.351  Power On Hours:                     20,842
00:14:41.351  Unsafe Shutdowns:                   46
00:14:41.351  Media and Data Integrity Errors:    0
00:14:41.351  Error Information Log Entries:      38,669
00:14:41.351  Warning  Comp. Temperature Time:    2198
00:14:41.351  Critical Comp. Temperature Time:    0
00:14:41.351  
00:14:41.351   10:52:30	-- cuse/spdk_smartctl_cuse.sh@73 -- # smartctl -d nvme -x /dev/spdk/nvme0
00:14:41.351  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.351  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.351  
00:14:41.351  [2024-12-15 10:52:30.338405] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:14:41.351  === START OF INFORMATION SECTION ===
00:14:41.351  Model Number:                       INTEL SSDPE2KX040T8
00:14:41.351  Serial Number:                      BTLJ83030AK84P0DGN
00:14:41.351  Firmware Version:                   VDV10184
00:14:41.351  PCI Vendor/Subsystem ID:            0x8086
00:14:41.351  IEEE OUI Identifier:                0x5cd2e4
00:14:41.351  Total NVM Capacity:                 4,000,787,030,016 [4.00 TB]
00:14:41.351  Unallocated NVM Capacity:           0
00:14:41.351  Controller ID:                      0
00:14:41.351  NVMe Version:                       1.2
00:14:41.351  Number of Namespaces:               128
00:14:41.351  Local Time is:                      Sun Dec 15 10:52:30 2024 CET
00:14:41.351  Firmware Updates (0x18):            4 Slots, no Reset required
00:14:41.351  Optional Admin Commands (0x000e):   Format Frmw_DL NS_Mngmt
00:14:41.351  Optional NVM Commands (0x0006):     Wr_Unc DS_Mngmt
00:14:41.351  Log Page Attributes (0x0e):         Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
00:14:41.351  Maximum Data Transfer Size:         32 Pages
00:14:41.351  Warning  Comp. Temp. Threshold:     70 Celsius
00:14:41.351  Critical Comp. Temp. Threshold:     80 Celsius
00:14:41.351  
00:14:41.351  Supported Power States
00:14:41.351  St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
00:14:41.351   0 +    20.00W       -        -    0  0  0  0        0       0
00:14:41.351  
00:14:41.351  === START OF SMART DATA SECTION ===
00:14:41.610  SMART overall-health self-assessment test result: PASSED
00:14:41.610  
00:14:41.610  SMART/Health Information (NVMe Log 0x02)
00:14:41.610  Critical Warning:                   0x00
00:14:41.610  Temperature:                        37 Celsius
00:14:41.610  Available Spare:                    99%
00:14:41.610  Available Spare Threshold:          10%
00:14:41.610  Percentage Used:                    32%
00:14:41.610  Data Units Read:                    628,379,983 [321 TB]
00:14:41.610  Data Units Written:                 790,799,418 [404 TB]
00:14:41.610  Host Read Commands:                 36,986,167,818
00:14:41.610  Host Write Commands:                42,949,937,725
00:14:41.610  Controller Busy Time:               3,917
00:14:41.610  Power Cycles:                       31
00:14:41.610  Power On Hours:                     20,842
00:14:41.610  Unsafe Shutdowns:                   46
00:14:41.610  Media and Data Integrity Errors:    0
00:14:41.610  Error Information Log Entries:      38,669
00:14:41.610  Warning  Comp. Temperature Time:    2198
00:14:41.610  Critical Comp. Temperature Time:    0
00:14:41.610  
00:14:41.610  Error Information (NVMe Log 0x01, 16 of 64 entries)
00:14:41.610  Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS  Message
00:14:41.610    0      38669     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    1      38668     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    2      38667     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    3      38666     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    4      38665     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    5      38664     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    6      38663     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    7      38662     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    8      38661     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610    9      38660     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610   10      38659     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610   11      38658     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610   12      38657     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610   13      38656     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610   14      38655     0       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610   15      38654     2       -  0xc00c      -            0     -     -  Internal Error
00:14:41.610  ... (48 entries not read)
00:14:41.610  
00:14:41.610  Self-tests not supported
00:14:41.610  
00:14:41.610   10:52:30	-- cuse/spdk_smartctl_cuse.sh@74 -- # smartctl -d nvme -H /dev/spdk/nvme0
00:14:41.610  smartctl 7.4 2023-08-01 r5530 [x86_64-linux-6.8.9-200.fc39.x86_64] (local build)
00:14:41.611  Copyright (C) 2002-23, Bruce Allen, Christian Franke, www.smartmontools.org
00:14:41.611  
00:14:41.611  [2024-12-15 10:52:30.429016] nvme_cuse.c: 797:cuse_ctrlr_ioctl: *ERROR*: Unsupported IOCTL 0x4E40.
00:14:41.611  === START OF SMART DATA SECTION ===
00:14:41.611  SMART overall-health self-assessment test result: PASSED
00:14:41.611  
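Steps @68 through @74 sweep the remaining smartctl query modes against the CUSE nodes: -i (identify) on the namespace node, then -c (capabilities), -A (attribute/health log), -x (everything), and -H (overall health verdict) on the controller node, all succeeding despite the repeated unsupported-ioctl warnings. The same sweep, with the device paths from this run:

    # Sketch: the smartctl modes exercised at @68-@74.
    smartctl -d nvme -i /dev/spdk/nvme0n1
    for opt in -c -A -x -H; do
        smartctl -d nvme "$opt" /dev/spdk/nvme0
    done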
00:14:41.611   10:52:30	-- cuse/spdk_smartctl_cuse.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:14:46.886   10:52:35	-- cuse/spdk_smartctl_cuse.sh@77 -- # sleep 1
00:14:47.145   10:52:36	-- cuse/spdk_smartctl_cuse.sh@78 -- # '[' -c /dev/spdk/nvme1 ']'
00:14:47.145   10:52:36	-- cuse/spdk_smartctl_cuse.sh@82 -- # trap - SIGINT SIGTERM EXIT
00:14:47.145   10:52:36	-- cuse/spdk_smartctl_cuse.sh@83 -- # killprocess 2159657
00:14:47.145   10:52:36	-- common/autotest_common.sh@936 -- # '[' -z 2159657 ']'
00:14:47.145   10:52:36	-- common/autotest_common.sh@940 -- # kill -0 2159657
00:14:47.145    10:52:36	-- common/autotest_common.sh@941 -- # uname
00:14:47.145   10:52:36	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:47.145    10:52:36	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2159657
00:14:47.404   10:52:36	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:47.404   10:52:36	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:47.404   10:52:36	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2159657'
00:14:47.404  killing process with pid 2159657
00:14:47.404   10:52:36	-- common/autotest_common.sh@955 -- # kill 2159657
00:14:47.404   10:52:36	-- common/autotest_common.sh@960 -- # wait 2159657
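Teardown: @76 detaches the controller over RPC, @77 sleeps one second, @78 checks that no /dev/spdk/nvme1 character device is present, and killprocess stops the SPDK target (pid 2159657 in this run). Stripped of its sudo/ps bookkeeping, the killprocess helper traced from autotest_common.sh amounts to:

    # Sketch of killprocess; the reactor_0/sudo checks in the trace only decide
    # how the kill is issued and are omitted here.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1      # process must still be alive
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }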
00:14:47.974  
00:14:47.974  real	0m32.595s
00:14:47.974  user	0m34.823s
00:14:47.974  sys	0m7.451s
00:14:47.974   10:52:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:14:47.974   10:52:36	-- common/autotest_common.sh@10 -- # set +x
00:14:47.974  ************************************
00:14:47.974  END TEST nvme_smartctl_cuse
00:14:47.974  ************************************
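The banner and the real/user/sys timing above are produced by the run_test wrapper from autotest_common.sh, which the very next line invokes again for nvme_ns_manage_cuse. In essence (argument-count check simplified):

    # Sketch of run_test: print banners and time the test body.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }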
00:14:47.974   10:52:36	-- cuse/nvme_cuse.sh@22 -- # run_test nvme_ns_manage_cuse /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh
00:14:47.974   10:52:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:14:47.974   10:52:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:47.974   10:52:36	-- common/autotest_common.sh@10 -- # set +x
00:14:47.974  ************************************
00:14:47.974  START TEST nvme_ns_manage_cuse
00:14:47.974  ************************************
00:14:47.974   10:52:36	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse/nvme_ns_manage_cuse.sh
00:14:47.974  * Looking for test storage...
00:14:47.974  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/cuse
00:14:47.974     10:52:36	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:14:47.974      10:52:36	-- common/autotest_common.sh@1690 -- # lcov --version
00:14:47.974      10:52:36	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:14:47.974     10:52:36	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:14:47.974     10:52:36	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:14:47.974     10:52:36	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:14:47.974     10:52:36	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:14:47.974     10:52:36	-- scripts/common.sh@335 -- # IFS=.-:
00:14:47.974     10:52:36	-- scripts/common.sh@335 -- # read -ra ver1
00:14:47.974     10:52:36	-- scripts/common.sh@336 -- # IFS=.-:
00:14:47.974     10:52:36	-- scripts/common.sh@336 -- # read -ra ver2
00:14:47.974     10:52:36	-- scripts/common.sh@337 -- # local 'op=<'
00:14:47.974     10:52:36	-- scripts/common.sh@339 -- # ver1_l=2
00:14:47.974     10:52:36	-- scripts/common.sh@340 -- # ver2_l=1
00:14:47.974     10:52:36	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:14:47.974     10:52:36	-- scripts/common.sh@343 -- # case "$op" in
00:14:47.974     10:52:36	-- scripts/common.sh@344 -- # : 1
00:14:47.974     10:52:36	-- scripts/common.sh@363 -- # (( v = 0 ))
00:14:47.975     10:52:36	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:47.975      10:52:36	-- scripts/common.sh@364 -- # decimal 1
00:14:47.975      10:52:36	-- scripts/common.sh@352 -- # local d=1
00:14:47.975      10:52:36	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:47.975      10:52:36	-- scripts/common.sh@354 -- # echo 1
00:14:47.975     10:52:36	-- scripts/common.sh@364 -- # ver1[v]=1
00:14:47.975      10:52:36	-- scripts/common.sh@365 -- # decimal 2
00:14:47.975      10:52:36	-- scripts/common.sh@352 -- # local d=2
00:14:47.975      10:52:36	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:47.975      10:52:36	-- scripts/common.sh@354 -- # echo 2
00:14:47.975     10:52:36	-- scripts/common.sh@365 -- # ver2[v]=2
00:14:47.975     10:52:36	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:14:47.975     10:52:36	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:14:47.975     10:52:36	-- scripts/common.sh@367 -- # return 0
00:14:47.975     10:52:36	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:47.975     10:52:36	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:14:47.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.975  		--rc genhtml_branch_coverage=1
00:14:47.975  		--rc genhtml_function_coverage=1
00:14:47.975  		--rc genhtml_legend=1
00:14:47.975  		--rc geninfo_all_blocks=1
00:14:47.975  		--rc geninfo_unexecuted_blocks=1
00:14:47.975  		
00:14:47.975  		'
00:14:47.975     10:52:36	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:14:47.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.975  		--rc genhtml_branch_coverage=1
00:14:47.975  		--rc genhtml_function_coverage=1
00:14:47.975  		--rc genhtml_legend=1
00:14:47.975  		--rc geninfo_all_blocks=1
00:14:47.975  		--rc geninfo_unexecuted_blocks=1
00:14:47.975  		
00:14:47.975  		'
00:14:47.975     10:52:36	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:14:47.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.975  		--rc genhtml_branch_coverage=1
00:14:47.975  		--rc genhtml_function_coverage=1
00:14:47.975  		--rc genhtml_legend=1
00:14:47.975  		--rc geninfo_all_blocks=1
00:14:47.975  		--rc geninfo_unexecuted_blocks=1
00:14:47.975  		
00:14:47.975  		'
00:14:47.975     10:52:36	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:14:47.975  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.975  		--rc genhtml_branch_coverage=1
00:14:47.975  		--rc genhtml_function_coverage=1
00:14:47.975  		--rc genhtml_legend=1
00:14:47.975  		--rc geninfo_all_blocks=1
00:14:47.975  		--rc geninfo_unexecuted_blocks=1
00:14:47.975  		
00:14:47.975  		'
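The scripts/common.sh trace above is the coverage-flag version gate: `lcov --version` is parsed with awk, cmp_versions decides whether the installed lcov is below 2, and LCOV_OPTS/LCOV are exported with the matching --rc switches (the pre-2.0 lcov_branch_coverage spellings, in this run). A condensed sketch; the flag names for lcov >= 2 are an assumption about the script's other branch:

    # Sketch of the gate; the real script uses lt/cmp_versions from scripts/common.sh,
    # a plain major-version test stands in for them here.
    lcov_ver=$(lcov --version | awk '{print $NF}')
    if [ "${lcov_ver%%.*}" -lt 2 ]; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
        lcov_rc_opt='--rc branch_coverage=1 --rc function_coverage=1'   # assumed spelling
    fi
    export LCOV_OPTS="$lcov_rc_opt --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1"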
00:14:47.975    10:52:36	-- cuse/common.sh@9 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:14:47.975       10:52:36	-- nvme/functions.sh@7 -- # dirname /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/functions.sh
00:14:47.975      10:52:36	-- nvme/functions.sh@7 -- # readlink -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/common/nvme/../../../
00:14:47.975     10:52:36	-- nvme/functions.sh@7 -- # rootdir=/var/jenkins/workspace/nvme-phy-autotest/spdk
00:14:47.975     10:52:36	-- nvme/functions.sh@8 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:14:47.975      10:52:36	-- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:47.975      10:52:36	-- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:47.975      10:52:36	-- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:47.975       10:52:36	-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:47.975       10:52:36	-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:47.975       10:52:36	-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:47.975       10:52:36	-- paths/export.sh@5 -- # export PATH
00:14:47.975       10:52:36	-- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:47.975     10:52:36	-- nvme/functions.sh@10 -- # ctrls=()
00:14:47.975     10:52:36	-- nvme/functions.sh@10 -- # declare -A ctrls
00:14:47.975     10:52:36	-- nvme/functions.sh@11 -- # nvmes=()
00:14:47.975     10:52:36	-- nvme/functions.sh@11 -- # declare -A nvmes
00:14:47.975     10:52:36	-- nvme/functions.sh@12 -- # bdfs=()
00:14:47.975     10:52:36	-- nvme/functions.sh@12 -- # declare -A bdfs
00:14:47.975     10:52:36	-- nvme/functions.sh@13 -- # ordered_ctrls=()
00:14:47.975     10:52:36	-- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:14:47.975     10:52:36	-- nvme/functions.sh@14 -- # nvme_name=
00:14:47.975    10:52:36	-- cuse/common.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:14:47.975   10:52:36	-- cuse/nvme_ns_manage_cuse.sh@10 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:14:51.268  Waiting for block devices as requested
00:14:51.268  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:14:51.268  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:51.268  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:51.268  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:51.528  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:51.528  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:51.528  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:51.787  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:51.787  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:14:51.787  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:14:52.046  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:14:52.046  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:14:52.046  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:14:52.306  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:14:52.306  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:14:52.306  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:14:52.568  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
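setup.sh reset, invoked at @10, hands every device the test pool had claimed for vfio-pci back to its kernel driver, as logged above: the NVMe controller at 0000:5e:00.0 to nvme and the sixteen I/OAT DMA channels to ioatdma. Per device this is the standard sysfs rebind, sketched here with this run's NVMe BDF (the driver_override dance is how setup.sh steers the probe; exact steps may differ by kernel):

    # Sketch of one vfio-pci -> nvme rebind (requires root).
    bdf=0000:5e:00.0
    echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind
    echo nvme > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo "" > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override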
00:14:52.568   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@11 -- # scan_nvme_ctrls
00:14:52.568   10:52:41	-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:14:52.568   10:52:41	-- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:14:52.568   10:52:41	-- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@49 -- # pci=0000:5e:00.0
00:14:52.568   10:52:41	-- nvme/functions.sh@50 -- # pci_can_use 0000:5e:00.0
00:14:52.568   10:52:41	-- scripts/common.sh@15 -- # local i
00:14:52.568   10:52:41	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:14:52.568   10:52:41	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:14:52.568   10:52:41	-- scripts/common.sh@24 -- # return 0
00:14:52.568   10:52:41	-- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:14:52.568   10:52:41	-- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:14:52.568   10:52:41	-- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@18 -- # shift
00:14:52.568   10:52:41	-- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568    10:52:41	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x8086"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[vid]=0x8086
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x8086 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x8086"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ssvid]=0x8086
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  BTLJ83030AK84P0DGN   ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[sn]="BTLJ83030AK84P0DGN  "'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[sn]='BTLJ83030AK84P0DGN  '
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  INTEL SSDPE2KX040T8                      ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[mn]="INTEL SSDPE2KX040T8                     "'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[mn]='INTEL SSDPE2KX040T8                     '
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  VDV10184 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[fr]="VDV10184"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[fr]=VDV10184
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[rab]="0"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[rab]=0
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  5cd2e4 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="5cd2e4"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ieee]=5cd2e4
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  5 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="5"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[mdts]=5
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x10200 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10200"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ver]=0x10200
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x989680 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0x989680"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[rtd3r]=0x989680
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0xe4e1c0 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0xe4e1c0"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[rtd3e]=0xe4e1c0
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x200 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x200"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[oaes]=0x200
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ctratt]=0
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.568   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.568   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.568   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:14:52.568    10:52:41	-- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[cntrltype]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[mec]="1"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[mec]=1
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0xe"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[oacs]=0xe
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[acl]=3
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x18 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x18"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[frmw]=0x18
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0xe ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0xe"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[lpa]=0xe
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  63 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="63"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[elpe]=63
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[npss]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  353 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="353"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[cctemp]=353
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="4,000,787,030,016"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[tnvmcap]=4,000,787,030,016
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[kas]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:14:52.569    10:52:41	-- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.569   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.569   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.569   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[pels]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[nn]="128"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[nn]=128
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x6 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x6"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[oncs]=0x6
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0x4"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[fna]=0x4
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[vwc]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[awun]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ocfs]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[sgls]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n   ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]=""'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[subnqn]=
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0 ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0"'
00:14:52.570    10:52:41	-- nvme/functions.sh@23 -- # nvme0[ps0]='mp:20.00W operational enlat:0 exlat:0 rrt:0 rrl:0'
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.570   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.570   10:52:41	-- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:14:52.570   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n - ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
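Note: everything from functions.sh@17 down to this point is a single `nvme_get nvme0 id-ctrl /dev/nvme0` call: it splits each `register : value` line of the id-ctrl output on `:` and stores it in the `nvme0` associative array (vid, ssvid, sn, mn, fr, ... msdbd, plus the power-state lines). A standalone sketch of that pattern, with illustrative names rather than the literal functions.sh source:

    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue                  # skip header/blank lines, as @22 does
        val=${val#"${val%%[![:space:]]*}"}         # trim leading whitespace from the value
        ctrl[${reg//[[:space:]]/}]=$val            # key on the squeezed register name
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${ctrl[oacs]}"                           # prints 0xe for this controller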
00:14:52.571   10:52:41	-- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:14:52.571   10:52:41	-- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:14:52.571   10:52:41	-- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:14:52.571   10:52:41	-- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:14:52.571   10:52:41	-- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@18 -- # shift
00:14:52.571   10:52:41	-- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571    10:52:41	-- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n '' ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x1d1c0beb0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x1d1c0beb0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x1d1c0beb0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x1d1c0beb0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0x1d1c0beb0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x1d1c0beb0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x1d1c0beb0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="1"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=1
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[flbas]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[mc]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[dpc]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  4,000,787,030,016 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="4,000,787,030,016"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=4,000,787,030,016
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[mssrl]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[mcl]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[msrc]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  010000009f6e00000000000000000000 ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="010000009f6e00000000000000000000"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[nguid]=010000009f6e00000000000000000000
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.571   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.571   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  0000000000009f6e ]]
00:14:52.571   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000009f6e"'
00:14:52.571    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000009f6e
00:14:52.572   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.572   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.572   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0x2 (in use) ]]
00:14:52.572   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0x2 (in use)"'
00:14:52.572    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0x2 (in use)'
00:14:52.572   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.572   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
00:14:52.572   10:52:41	-- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0  ]]
00:14:52.572   10:52:41	-- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:0   lbads:12 rp:0 "'
00:14:52.572    10:52:41	-- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:0   lbads:12 rp:0 '
00:14:52.572   10:52:41	-- nvme/functions.sh@21 -- # IFS=:
00:14:52.572   10:52:41	-- nvme/functions.sh@21 -- # read -r reg val
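Note: the id-ns values above are self-consistent: nsze = ncap = nuse = 0x1d1c0beb0 LBAs, and lbaf0 (in use) has lbads:9, i.e. 2^9 = 512-byte blocks. That multiplies out to exactly the nvmcap/tnvmcap figure parsed earlier:

    printf '%d\n' 0x1d1c0beb0            # 7814037168 LBAs
    echo $(( 0x1d1c0beb0 * (1 << 9) ))   # 4000787030016 bytes = nvmcap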
00:14:52.572   10:52:41	-- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:14:52.572   10:52:41	-- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:14:52.572   10:52:41	-- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:14:52.572   10:52:41	-- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:5e:00.0
00:14:52.572   10:52:41	-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:14:52.572   10:52:41	-- nvme/functions.sh@65 -- # (( 1 > 0 ))
00:14:52.572    10:52:41	-- cuse/nvme_ns_manage_cuse.sh@14 -- # get_nvme_with_ns_management
00:14:52.572    10:52:41	-- nvme/functions.sh@153 -- # local _ctrls
00:14:52.572    10:52:41	-- nvme/functions.sh@155 -- # _ctrls=($(get_nvmes_with_ns_management))
00:14:52.572     10:52:41	-- nvme/functions.sh@155 -- # get_nvmes_with_ns_management
00:14:52.572     10:52:41	-- nvme/functions.sh@144 -- # (( 1 == 0 ))
00:14:52.572     10:52:41	-- nvme/functions.sh@146 -- # local ctrl
00:14:52.572     10:52:41	-- nvme/functions.sh@147 -- # for ctrl in "${!ctrls[@]}"
00:14:52.572     10:52:41	-- nvme/functions.sh@148 -- # get_oacs nvme0 nsmgt
00:14:52.572     10:52:41	-- nvme/functions.sh@121 -- # local ctrl=nvme0 bit=nsmgt
00:14:52.572     10:52:41	-- nvme/functions.sh@122 -- # local -A bits
00:14:52.572     10:52:41	-- nvme/functions.sh@125 -- # bits["ss/sr"]=1
00:14:52.572     10:52:41	-- nvme/functions.sh@126 -- # bits["fnvme"]=2
00:14:52.572     10:52:41	-- nvme/functions.sh@127 -- # bits["fc/fi"]=4
00:14:52.572     10:52:41	-- nvme/functions.sh@128 -- # bits["nsmgt"]=8
00:14:52.572     10:52:41	-- nvme/functions.sh@129 -- # bits["self-test"]=16
00:14:52.572     10:52:41	-- nvme/functions.sh@130 -- # bits["directives"]=32
00:14:52.572     10:52:41	-- nvme/functions.sh@131 -- # bits["nvme-mi-s/r"]=64
00:14:52.572     10:52:41	-- nvme/functions.sh@132 -- # bits["virtmgt"]=128
00:14:52.572     10:52:41	-- nvme/functions.sh@133 -- # bits["doorbellbuf"]=256
00:14:52.572     10:52:41	-- nvme/functions.sh@134 -- # bits["getlba"]=512
00:14:52.572     10:52:41	-- nvme/functions.sh@135 -- # bits["commfeatlock"]=1024
00:14:52.572     10:52:41	-- nvme/functions.sh@137 -- # bit=nsmgt
00:14:52.572     10:52:41	-- nvme/functions.sh@138 -- # [[ -n 8 ]]
00:14:52.572      10:52:41	-- nvme/functions.sh@140 -- # get_nvme_ctrl_feature nvme0 oacs
00:14:52.572      10:52:41	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oacs
00:14:52.572      10:52:41	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:14:52.572      10:52:41	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:14:52.572      10:52:41	-- nvme/functions.sh@75 -- # [[ -n 0xe ]]
00:14:52.572      10:52:41	-- nvme/functions.sh@76 -- # echo 0xe
00:14:52.572     10:52:41	-- nvme/functions.sh@140 -- # (( 0xe & bits[nsmgt] ))
00:14:52.572     10:52:41	-- nvme/functions.sh@148 -- # echo nvme0
00:14:52.572    10:52:41	-- nvme/functions.sh@156 -- # (( 1 > 0 ))
00:14:52.572    10:52:41	-- nvme/functions.sh@157 -- # echo nvme0
00:14:52.572    10:52:41	-- nvme/functions.sh@158 -- # return 0
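Note: the nsmgt probe above is a plain bitmask test against OACS. On this drive oacs = 0xe = 0b1110, so per the bits table at @125-@135: fnvme (0x2, Format NVM), fc/fi (0x4, firmware commit/download) and nsmgt (0x8, namespace management) are supported, while ss/sr (0x1) is not. The decisive check at @140 reduces to:

    (( 0xe & 8 )) && echo nvme0    # nonzero -> nvme0 supports namespace management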
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@14 -- # nvme_name=nvme0
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@20 -- # nvme_dev=/dev/nvme0
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@21 -- # bdf=0000:5e:00.0
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@22 -- # nsids=($(get_nvme_nss "$nvme_name"))
00:14:52.572    10:52:41	-- cuse/nvme_ns_manage_cuse.sh@22 -- # get_nvme_nss nvme0
00:14:52.572    10:52:41	-- nvme/functions.sh@94 -- # local ctrl=nvme0
00:14:52.572    10:52:41	-- nvme/functions.sh@96 -- # [[ -n nvme0_ns ]]
00:14:52.572    10:52:41	-- nvme/functions.sh@97 -- # local -n _nss=nvme0_ns
00:14:52.572    10:52:41	-- nvme/functions.sh@99 -- # echo 1
00:14:52.572    10:52:41	-- cuse/nvme_ns_manage_cuse.sh@25 -- # get_nvme_ctrl_feature nvme0 oaes
00:14:52.572    10:52:41	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oaes
00:14:52.572    10:52:41	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:14:52.572    10:52:41	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:14:52.572    10:52:41	-- nvme/functions.sh@75 -- # [[ -n 0x200 ]]
00:14:52.572    10:52:41	-- nvme/functions.sh@76 -- # echo 0x200
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@25 -- # oaes=0x200
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@26 -- # aer_ns_change=0
00:14:52.572    10:52:41	-- cuse/nvme_ns_manage_cuse.sh@27 -- # get_nvme_ctrl_feature nvme0
00:14:52.572    10:52:41	-- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=cntlid
00:14:52.572    10:52:41	-- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:14:52.572    10:52:41	-- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:14:52.572    10:52:41	-- nvme/functions.sh@75 -- # [[ -n 0 ]]
00:14:52.572    10:52:41	-- nvme/functions.sh@76 -- # echo 0
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@27 -- # cntlid=0
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@70 -- # remove_all_namespaces
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces'
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:14:52.572  ---
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces'
00:14:52.572  delete all namespaces
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:14:52.572  ---
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}"
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@40 -- # info_print 'removing nsid=1'
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:14:52.572  ---
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1'
00:14:52.572  removing nsid=1
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:14:52.572  ---
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0
00:14:52.572  detach-ns: Success, nsid:1
00:14:52.572   10:52:41	-- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1
00:15:10.673  delete-ns: Success, deleted nsid:1
00:15:10.673   10:52:59	-- cuse/nvme_ns_manage_cuse.sh@72 -- # reset_nvme_if_aer_unsupported /dev/nvme0
00:15:10.673   10:52:59	-- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]]
00:15:10.673   10:52:59	-- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1
00:15:11.612   10:53:00	-- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0
00:15:11.871   10:53:00	-- cuse/nvme_ns_manage_cuse.sh@73 -- # sleep 1
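Note: @25-@32 explain the resets sprinkled through this flow: OAES is 0x200 (bit 9, firmware-activation notices) while bit 8 (0x100, namespace-attribute notices) is clear, so aer_ns_change=0 and the script cannot rely on an AER to announce namespace changes. Condensed, the remove-all sequence above is:

    nvme detach-ns /dev/nvme0 -n 1 -c 0   # detach nsid 1 from controller 0
    nvme delete-ns /dev/nvme0 -n 1        # then delete it
    sleep 1 && nvme reset /dev/nvme0      # reset so the kernel re-enumerates namespaces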
00:15:12.810   10:53:01	-- cuse/nvme_ns_manage_cuse.sh@75 -- # PCI_ALLOWED=0000:5e:00.0
00:15:12.810   10:53:01	-- cuse/nvme_ns_manage_cuse.sh@75 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:15:16.128  0000:00:04.0 (8086 2021): Skipping denied controller at 0000:00:04.0
00:15:16.128  0000:00:04.1 (8086 2021): Skipping denied controller at 0000:00:04.1
00:15:16.128  0000:00:04.2 (8086 2021): Skipping denied controller at 0000:00:04.2
00:15:16.128  0000:00:04.3 (8086 2021): Skipping denied controller at 0000:00:04.3
00:15:16.128  0000:00:04.4 (8086 2021): Skipping denied controller at 0000:00:04.4
00:15:16.128  0000:00:04.5 (8086 2021): Skipping denied controller at 0000:00:04.5
00:15:16.128  0000:00:04.6 (8086 2021): Skipping denied controller at 0000:00:04.6
00:15:16.128  0000:00:04.7 (8086 2021): Skipping denied controller at 0000:00:04.7
00:15:16.128  0000:80:04.0 (8086 2021): Skipping denied controller at 0000:80:04.0
00:15:16.128  0000:80:04.1 (8086 2021): Skipping denied controller at 0000:80:04.1
00:15:16.128  0000:80:04.2 (8086 2021): Skipping denied controller at 0000:80:04.2
00:15:16.128  0000:80:04.3 (8086 2021): Skipping denied controller at 0000:80:04.3
00:15:16.128  0000:80:04.4 (8086 2021): Skipping denied controller at 0000:80:04.4
00:15:16.128  0000:80:04.5 (8086 2021): Skipping denied controller at 0000:80:04.5
00:15:16.128  0000:80:04.6 (8086 2021): Skipping denied controller at 0000:80:04.6
00:15:16.128  0000:80:04.7 (8086 2021): Skipping denied controller at 0000:80:04.7
00:15:19.421  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
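Note: this second setup.sh run is gated by an allow-list, which is why every ioatdma channel is reported as denied and only the NVMe controller moves to vfio-pci. An equivalent one-liner for the two @75 lines above:

    PCI_ALLOWED="0000:5e:00.0" /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh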
00:15:19.421   10:53:07	-- cuse/nvme_ns_manage_cuse.sh@78 -- # spdk_tgt_pid=2165129
00:15:19.421   10:53:07	-- cuse/nvme_ns_manage_cuse.sh@77 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:15:19.421   10:53:07	-- cuse/nvme_ns_manage_cuse.sh@79 -- # trap 'kill -9 ${spdk_tgt_pid}; clean_up; exit 1' SIGINT SIGTERM EXIT
00:15:19.421   10:53:07	-- cuse/nvme_ns_manage_cuse.sh@81 -- # waitforlisten 2165129
00:15:19.421   10:53:07	-- common/autotest_common.sh@829 -- # '[' -z 2165129 ']'
00:15:19.421   10:53:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:19.421   10:53:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:19.421   10:53:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:19.421  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:19.421   10:53:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:19.421   10:53:07	-- common/autotest_common.sh@10 -- # set +x
00:15:19.421  [2024-12-15 10:53:07.940902] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:19.421  [2024-12-15 10:53:07.940975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165129 ]
00:15:19.421  EAL: No free 2048 kB hugepages reported on node 1
00:15:19.421  [2024-12-15 10:53:08.047906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:19.421  [2024-12-15 10:53:08.153608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:19.421  [2024-12-15 10:53:08.153813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:19.421  [2024-12-15 10:53:08.153818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:19.421  [2024-12-15 10:53:08.342329] 'OCF_Core' volume operations registered
00:15:19.421  [2024-12-15 10:53:08.345526] 'OCF_Cache' volume operations registered
00:15:19.421  [2024-12-15 10:53:08.349124] 'OCF Composite' volume operations registered
00:15:19.421  [2024-12-15 10:53:08.352349] 'SPDK_block_device' volume operations registered
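Note: -m 0x3 at @77 is a CPU core mask (0b11 = cores 0 and 1), which matches "Total cores available: 2" and the two reactors started above; @78 records the target's pid for the later killprocess. The exact launch line is not visible in the trace, but one plausible shape is:

    /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!   # 2165129 in this run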
00:15:19.989   10:53:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:19.989   10:53:08	-- common/autotest_common.sh@862 -- # return 0
00:15:19.989   10:53:08	-- cuse/nvme_ns_manage_cuse.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:15:23.281  
00:15:23.281   10:53:11	-- cuse/nvme_ns_manage_cuse.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
00:15:23.281  [2024-12-15 10:53:12.097282] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:15:23.281  [2024-12-15 10:53:12.097446] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
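Note: @83/@84 hand the PCIe controller to the SPDK target and then expose it back to userspace through CUSE, so stock nvme-cli can keep managing it as a character device (/dev/spdk/nvme0) while SPDK owns the hardware:

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    ./scripts/rpc.py bdev_nvme_cuse_register -n Nvme0     # creates /dev/spdk/nvme0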
00:15:23.281   10:53:12	-- cuse/nvme_ns_manage_cuse.sh@86 -- # ctrlr=/dev/spdk/nvme0
00:15:23.281   10:53:12	-- cuse/nvme_ns_manage_cuse.sh@88 -- # sleep 1
00:15:24.219   10:53:13	-- cuse/nvme_ns_manage_cuse.sh@89 -- # [[ -c /dev/spdk/nvme0 ]]
00:15:24.219   10:53:13	-- cuse/nvme_ns_manage_cuse.sh@94 -- # sleep 1
00:15:25.158   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@96 -- # for nsid in "${nsids[@]}"
00:15:25.158   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@97 -- # info_print 'create ns: nsze=10000 ncap=10000 flbias=0'
00:15:25.158   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:25.158  ---
00:15:25.158   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'create ns: nsze=10000 ncap=10000 flbias=0'
00:15:25.158  create ns: nsze=10000 ncap=10000 flbias=0
00:15:25.158   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:25.158  ---
00:15:25.158   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@98 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/spdk/nvme0 -s 10000 -c 10000 -f 0
00:15:25.727  create-ns: Success, created nsid:1
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@99 -- # info_print 'attach ns: nsid=1 controller=0'
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:25.727  ---
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'attach ns: nsid=1 controller=0'
00:15:25.727  attach ns: nsid=1 controller=0
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:25.727  ---
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@100 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/spdk/nvme0 -n 1 -c 0
00:15:25.727  attach-ns: Success, nsid:1
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@101 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]]
00:15:25.727   10:53:14	-- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1
00:15:27.107   10:53:15	-- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0
00:15:27.107  [2024-12-15 10:53:15.718157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:15:27.107  [2024-12-15 10:53:15.719127] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:15:27.107   10:53:15	-- cuse/nvme_ns_manage_cuse.sh@102 -- # sleep 1
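Note: the create/attach round-trip above is the same namespace-management flow as before, now against the CUSE device; the reset is again needed because there is no ns-change AER, and it is what surfaces the namespace node (the fuse session for spdk/nvme0n1 logged at 10:53:15). Condensed:

    nvme create-ns /dev/spdk/nvme0 -s 10000 -c 10000 -f 0   # nsze/ncap in blocks, LBA format 0
    nvme attach-ns /dev/spdk/nvme0 -n 1 -c 0                # attach nsid 1 to cntlid 0
    nvme reset /dev/spdk/nvme0                              # makes /dev/spdk/nvme0n1 appear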
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@103 -- # [[ -c /dev/spdk/nvme0n1 ]]
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@104 -- # info_print 'detach ns: nsid=1 controller=0'
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:28.045  ---
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'detach ns: nsid=1 controller=0'
00:15:28.045  detach ns: nsid=1 controller=0
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:28.045  ---
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@105 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/spdk/nvme0 -n 1 -c 0
00:15:28.045  detach-ns: Success, nsid:1
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@106 -- # info_print 'delete ns: nsid=1'
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:28.045  ---
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete ns: nsid=1'
00:15:28.045  delete ns: nsid=1
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:28.045  ---
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@107 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/spdk/nvme0 -n 1
00:15:28.045  delete-ns: Success, deleted nsid:1
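[annotation] Teardown is the mirror image, again through the CUSE node:

    nvme detach-ns /dev/spdk/nvme0 -n 1 -c 0   # detach nsid 1 from controller 0 first
    nvme delete-ns /dev/spdk/nvme0 -n 1        # then release the namespace itself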
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@108 -- # reset_nvme_if_aer_unsupported /dev/spdk/nvme0
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@30 -- # [[ 0 -eq 0 ]]
00:15:28.045   10:53:16	-- cuse/nvme_ns_manage_cuse.sh@31 -- # sleep 1
00:15:28.983   10:53:17	-- cuse/nvme_ns_manage_cuse.sh@32 -- # /usr/local/src/nvme-cli/nvme reset /dev/spdk/nvme0
00:15:28.983  [2024-12-15 10:53:17.784178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:5e:00.0] resetting controller
00:15:29.243   10:53:18	-- cuse/nvme_ns_manage_cuse.sh@109 -- # sleep 1
00:15:30.227   10:53:19	-- cuse/nvme_ns_manage_cuse.sh@110 -- # [[ ! -c /dev/spdk/nvme0n1 ]]
00:15:30.227   10:53:19	-- cuse/nvme_ns_manage_cuse.sh@118 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:15:34.509   10:53:23	-- cuse/nvme_ns_manage_cuse.sh@120 -- # sleep 1
00:15:35.448   10:53:24	-- cuse/nvme_ns_manage_cuse.sh@121 -- # [[ ! -c /dev/spdk/nvme0 ]]
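[annotation] With the namespace gone, the bdev controller is detached over RPC and the test asserts both CUSE nodes have disappeared; condensed, with the same $rpc helper assumed:

    $rpc bdev_nvme_detach_controller Nvme0
    sleep 1
    [[ ! -c /dev/spdk/nvme0n1 ]] && [[ ! -c /dev/spdk/nvme0 ]]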
00:15:35.448   10:53:24	-- cuse/nvme_ns_manage_cuse.sh@123 -- # trap - SIGINT SIGTERM EXIT
00:15:35.448   10:53:24	-- cuse/nvme_ns_manage_cuse.sh@124 -- # killprocess 2165129
00:15:35.448   10:53:24	-- common/autotest_common.sh@936 -- # '[' -z 2165129 ']'
00:15:35.448   10:53:24	-- common/autotest_common.sh@940 -- # kill -0 2165129
00:15:35.448    10:53:24	-- common/autotest_common.sh@941 -- # uname
00:15:35.448   10:53:24	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:15:35.448    10:53:24	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2165129
00:15:35.707   10:53:24	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:15:35.707   10:53:24	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:15:35.707   10:53:24	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2165129'
00:15:35.707  killing process with pid 2165129
00:15:35.707   10:53:24	-- common/autotest_common.sh@955 -- # kill 2165129
00:15:35.707   10:53:24	-- common/autotest_common.sh@960 -- # wait 2165129
00:15:36.279   10:53:25	-- cuse/nvme_ns_manage_cuse.sh@125 -- # clean_up
00:15:36.279   10:53:25	-- cuse/nvme_ns_manage_cuse.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:15:38.818  Waiting for block devices as requested
00:15:39.077  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:15:39.077  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:15:39.077  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:15:39.077  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:15:39.077  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:15:39.077  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:15:39.337  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:15:44.619  * Events for some block/disk devices (0000:5e:00.0) were not caught; they may be missing
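[annotation] setup.sh reset hands each device back to a kernel driver (vfio-pci -> nvme for the SSD; the ioatdma channels were never rebound). The script's internals are not shown in the trace; the conventional sysfs sequence for one device looks like the following, offered purely as an illustrative sketch:

    bdf=0000:5e:00.0
    echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind        # release from vfio-pci
    echo nvme   > /sys/bus/pci/devices/$bdf/driver_override   # pin the next driver
    echo "$bdf" > /sys/bus/pci/drivers_probe                  # let the kernel nvme driver claim it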
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@48 -- # remove_all_namespaces
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@37 -- # info_print 'delete all namespaces'
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:44.619  ---
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'delete all namespaces'
00:15:44.619  delete all namespaces
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:44.619  ---
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@39 -- # for nsid in "${nsids[@]}"
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@40 -- # info_print 'removing nsid=1'
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@64 -- # echo ---
00:15:44.619  ---
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@65 -- # echo 'removing nsid=1'
00:15:44.619  removing nsid=1
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@66 -- # echo ---
00:15:44.619  ---
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@41 -- # /usr/local/src/nvme-cli/nvme detach-ns /dev/nvme0 -n 1 -c 0
00:15:44.619  NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field (0x4002)
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@41 -- # true
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@42 -- # /usr/local/src/nvme-cli/nvme delete-ns /dev/nvme0 -n 1
00:15:44.619  NVMe status: Invalid Field in Command: A reserved coded value or an unsupported value in a defined field (0x4002)
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@42 -- # true
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@50 -- # echo 'Restoring /dev/nvme0...'
00:15:44.619  Restoring /dev/nvme0...
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@51 -- # for nsid in "${nsids[@]}"
00:15:44.619    10:53:32	-- cuse/nvme_ns_manage_cuse.sh@52 -- # get_nvme_ns_feature nvme0 1 ncap
00:15:44.619    10:53:32	-- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=ncap
00:15:44.619    10:53:32	-- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns
00:15:44.619    10:53:32	-- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@87 -- # local -n _ns=nvme0n1
00:15:44.619    10:53:32	-- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@90 -- # echo 0x1d1c0beb0
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@52 -- # ncap=0x1d1c0beb0
00:15:44.619    10:53:32	-- cuse/nvme_ns_manage_cuse.sh@53 -- # get_nvme_ns_feature nvme0 1 nsze
00:15:44.619    10:53:32	-- nvme/functions.sh@80 -- # local ctrl=nvme0 ns=1 reg=nsze
00:15:44.619    10:53:32	-- nvme/functions.sh@82 -- # [[ -n nvme0_ns ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@84 -- # local -n _nss=nvme0_ns
00:15:44.619    10:53:32	-- nvme/functions.sh@85 -- # [[ -n nvme0n1 ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@87 -- # local -n _ns=nvme0n1
00:15:44.619    10:53:32	-- nvme/functions.sh@89 -- # [[ -n 0x1d1c0beb0 ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@90 -- # echo 0x1d1c0beb0
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@53 -- # nsze=0x1d1c0beb0
00:15:44.619    10:53:32	-- cuse/nvme_ns_manage_cuse.sh@54 -- # get_active_lbaf nvme0 1
00:15:44.619    10:53:32	-- nvme/functions.sh@103 -- # local ctrl=nvme0 ns=1 reg lbaf
00:15:44.619    10:53:32	-- nvme/functions.sh@105 -- # [[ -n nvme0_ns ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@107 -- # local -n _nss=nvme0_ns
00:15:44.619    10:53:32	-- nvme/functions.sh@108 -- # [[ -n nvme0n1 ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@110 -- # local -n _ns=nvme0n1
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ fpi == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nawupf == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nsfeat == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ endgid == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nawun == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nabspf == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nabo == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nabsn == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nulbaf == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ ncap == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ dpc == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ dps == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nguid == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ noiob == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nacwu == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ mssrl == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ dlfeat == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nlbaf == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ mc == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nmic == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ nvmsetid == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # continue
00:15:44.619    10:53:32	-- nvme/functions.sh@112 -- # for reg in "${!_ns[@]}"
00:15:44.619    10:53:32	-- nvme/functions.sh@113 -- # [[ lbaf0 == lbaf* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@114 -- # [[ ms:0   lbads:9  rp:0x2 (in use) == *\i\n\ \u\s\e* ]]
00:15:44.619    10:53:32	-- nvme/functions.sh@115 -- # echo 0
00:15:44.619    10:53:32	-- nvme/functions.sh@115 -- # return 0
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@54 -- # lbaf=0
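[annotation] The long loop above is get_active_lbaf walking the cached id-ns fields of nvme0n1 until it hits the lbafN descriptor flagged "(in use)" (lbaf0 here, i.e. lbads:9 = 512-byte sectors). Outside the harness the same answer comes straight from nvme-cli; a hypothetical one-liner that relies on nvme-cli's "lbaf  N : ... (in use)" row layout:

    nvme id-ns /dev/nvme0 -n 1 | awk '/\(in use\)/ {print $2}'   # prints the active LBAF index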
00:15:44.619   10:53:32	-- cuse/nvme_ns_manage_cuse.sh@55 -- # /usr/local/src/nvme-cli/nvme create-ns /dev/nvme0 -s 0x1d1c0beb0 -c 0x1d1c0beb0 -f 0
00:15:44.619  create-ns: Success, created nsid:1
00:15:44.619   10:53:33	-- cuse/nvme_ns_manage_cuse.sh@56 -- # /usr/local/src/nvme-cli/nvme attach-ns /dev/nvme0 -n 1 -c 0
00:15:44.620  attach-ns: Success, nsid:1
00:15:44.620   10:53:33	-- cuse/nvme_ns_manage_cuse.sh@57 -- # /usr/local/src/nvme-cli/nvme reset /dev/nvme0
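[annotation] With nsze, ncap and the active LBAF recovered, the restore is the same three verbs used earlier, now against the kernel nvme node:

    nvme create-ns /dev/nvme0 -s 0x1d1c0beb0 -c 0x1d1c0beb0 -f 0   # recreate the full-capacity namespace
    nvme attach-ns /dev/nvme0 -n 1 -c 0
    nvme reset     /dev/nvme0                                      # surface it again as /dev/nvme0n1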
00:15:44.620   10:53:33	-- cuse/nvme_ns_manage_cuse.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:15:47.910  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:15:47.910  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:15:48.169  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:15:48.169  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:15:48.169  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:15:48.169  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:15:51.462  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:15:51.462  
00:15:51.462  real	1m3.402s
00:15:51.462  user	0m37.704s
00:15:51.462  sys	0m10.018s
00:15:51.462   10:53:40	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:51.462   10:53:40	-- common/autotest_common.sh@10 -- # set +x
00:15:51.462  ************************************
00:15:51.462  END TEST nvme_ns_manage_cuse
00:15:51.462  ************************************
00:15:51.462   10:53:40	-- cuse/nvme_cuse.sh@23 -- # rmmod cuse
00:15:51.462   10:53:40	-- cuse/nvme_cuse.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:15:54.756  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:15:54.756  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:15:54.756  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:15:54.756  
00:15:54.756  real	2m58.492s
00:15:54.756  user	2m28.006s
00:15:54.756  sys	0m34.769s
00:15:54.756   10:53:43	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:15:54.756   10:53:43	-- common/autotest_common.sh@10 -- # set +x
00:15:54.756  ************************************
00:15:54.756  END TEST nvme_cuse
00:15:54.756  ************************************
00:15:54.756   10:53:43	-- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]]
00:15:54.756   10:53:43	-- spdk/autotest.sh@225 -- # [[ 0 -eq 1 ]]
00:15:54.756   10:53:43	-- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]]
00:15:54.756   10:53:43	-- spdk/autotest.sh@233 -- # run_test nvme_rpc /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh
00:15:54.756   10:53:43	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:15:54.756   10:53:43	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:54.756   10:53:43	-- common/autotest_common.sh@10 -- # set +x
00:15:54.756  ************************************
00:15:54.756  START TEST nvme_rpc
00:15:54.756  ************************************
00:15:54.756   10:53:43	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc.sh
00:15:54.756  * Looking for test storage...
00:15:54.756  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:15:54.756    10:53:43	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:15:54.756     10:53:43	-- common/autotest_common.sh@1690 -- # lcov --version
00:15:54.756     10:53:43	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:15:54.756    10:53:43	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:15:54.756    10:53:43	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:15:54.756    10:53:43	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:15:54.756    10:53:43	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:15:54.756    10:53:43	-- scripts/common.sh@335 -- # IFS=.-:
00:15:54.756    10:53:43	-- scripts/common.sh@335 -- # read -ra ver1
00:15:54.756    10:53:43	-- scripts/common.sh@336 -- # IFS=.-:
00:15:54.756    10:53:43	-- scripts/common.sh@336 -- # read -ra ver2
00:15:54.756    10:53:43	-- scripts/common.sh@337 -- # local 'op=<'
00:15:54.756    10:53:43	-- scripts/common.sh@339 -- # ver1_l=2
00:15:54.756    10:53:43	-- scripts/common.sh@340 -- # ver2_l=1
00:15:54.756    10:53:43	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:15:54.756    10:53:43	-- scripts/common.sh@343 -- # case "$op" in
00:15:54.756    10:53:43	-- scripts/common.sh@344 -- # : 1
00:15:54.756    10:53:43	-- scripts/common.sh@363 -- # (( v = 0 ))
00:15:54.756    10:53:43	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:54.756     10:53:43	-- scripts/common.sh@364 -- # decimal 1
00:15:54.756     10:53:43	-- scripts/common.sh@352 -- # local d=1
00:15:54.756     10:53:43	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:54.756     10:53:43	-- scripts/common.sh@354 -- # echo 1
00:15:54.756    10:53:43	-- scripts/common.sh@364 -- # ver1[v]=1
00:15:54.756     10:53:43	-- scripts/common.sh@365 -- # decimal 2
00:15:54.756     10:53:43	-- scripts/common.sh@352 -- # local d=2
00:15:54.756     10:53:43	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:54.756     10:53:43	-- scripts/common.sh@354 -- # echo 2
00:15:54.756    10:53:43	-- scripts/common.sh@365 -- # ver2[v]=2
00:15:54.756    10:53:43	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:15:54.756    10:53:43	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:15:54.756    10:53:43	-- scripts/common.sh@367 -- # return 0
00:15:54.756    10:53:43	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:54.756    10:53:43	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:15:54.756  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.756  		--rc genhtml_branch_coverage=1
00:15:54.756  		--rc genhtml_function_coverage=1
00:15:54.756  		--rc genhtml_legend=1
00:15:54.756  		--rc geninfo_all_blocks=1
00:15:54.756  		--rc geninfo_unexecuted_blocks=1
00:15:54.756  		
00:15:54.756  		'
00:15:54.756    10:53:43	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:15:54.756  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.756  		--rc genhtml_branch_coverage=1
00:15:54.756  		--rc genhtml_function_coverage=1
00:15:54.756  		--rc genhtml_legend=1
00:15:54.756  		--rc geninfo_all_blocks=1
00:15:54.756  		--rc geninfo_unexecuted_blocks=1
00:15:54.756  		
00:15:54.756  		'
00:15:54.756    10:53:43	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:15:54.756  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.756  		--rc genhtml_branch_coverage=1
00:15:54.756  		--rc genhtml_function_coverage=1
00:15:54.756  		--rc genhtml_legend=1
00:15:54.756  		--rc geninfo_all_blocks=1
00:15:54.756  		--rc geninfo_unexecuted_blocks=1
00:15:54.756  		
00:15:54.756  		'
00:15:54.756    10:53:43	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:15:54.756  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.756  		--rc genhtml_branch_coverage=1
00:15:54.756  		--rc genhtml_function_coverage=1
00:15:54.756  		--rc genhtml_legend=1
00:15:54.756  		--rc geninfo_all_blocks=1
00:15:54.756  		--rc geninfo_unexecuted_blocks=1
00:15:54.756  		
00:15:54.756  		'
00:15:54.756   10:53:43	-- nvme/nvme_rpc.sh@11 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:15:54.756    10:53:43	-- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:15:54.756    10:53:43	-- common/autotest_common.sh@1519 -- # bdfs=()
00:15:54.756    10:53:43	-- common/autotest_common.sh@1519 -- # local bdfs
00:15:54.756    10:53:43	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:15:54.756     10:53:43	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:15:54.756     10:53:43	-- common/autotest_common.sh@1508 -- # bdfs=()
00:15:54.756     10:53:43	-- common/autotest_common.sh@1508 -- # local bdfs
00:15:54.756     10:53:43	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:54.756      10:53:43	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:15:54.756      10:53:43	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:15:54.756     10:53:43	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:15:54.756     10:53:43	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:15:54.756    10:53:43	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
00:15:54.756   10:53:43	-- nvme/nvme_rpc.sh@13 -- # bdf=0000:5e:00.0
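[annotation] get_first_nvme_bdf resolves the BDF by asking gen_nvme.sh for a ready-made bdev config and pulling the transport addresses out with jq, exactly as the trace shows:

    bdfs=($(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh \
            | jq -r '.config[].params.traddr'))
    echo "${bdfs[0]}"   # -> 0000:5e:00.0 on this rig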
00:15:54.756   10:53:43	-- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=2171974
00:15:54.756   10:53:43	-- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:15:54.756   10:53:43	-- nvme/nvme_rpc.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:15:54.757   10:53:43	-- nvme/nvme_rpc.sh@19 -- # waitforlisten 2171974
00:15:54.757   10:53:43	-- common/autotest_common.sh@829 -- # '[' -z 2171974 ']'
00:15:54.757   10:53:43	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:54.757   10:53:43	-- common/autotest_common.sh@834 -- # local max_retries=100
00:15:54.757   10:53:43	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:54.757  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:54.757   10:53:43	-- common/autotest_common.sh@838 -- # xtrace_disable
00:15:54.757   10:53:43	-- common/autotest_common.sh@10 -- # set +x
00:15:54.757  [2024-12-15 10:53:43.599106] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:15:54.757  [2024-12-15 10:53:43.599178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2171974 ]
00:15:54.757  EAL: No free 2048 kB hugepages reported on node 1
00:15:54.757  [2024-12-15 10:53:43.706496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:55.016  [2024-12-15 10:53:43.808945] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:15:55.016  [2024-12-15 10:53:43.809197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:15:55.016  [2024-12-15 10:53:43.809203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.016  [2024-12-15 10:53:44.007475] 'OCF_Core' volume operations registered
00:15:55.016  [2024-12-15 10:53:44.010977] 'OCF_Cache' volume operations registered
00:15:55.016  [2024-12-15 10:53:44.014961] 'OCF Composite' volume operations registered
00:15:55.016  [2024-12-15 10:53:44.018437] 'SPDK_block_device' volume operations registered
00:15:55.585   10:53:44	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:15:55.585   10:53:44	-- common/autotest_common.sh@862 -- # return 0
00:15:55.585   10:53:44	-- nvme/nvme_rpc.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:15:58.877  Nvme0n1
00:15:58.877   10:53:47	-- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:15:58.877   10:53:47	-- nvme/nvme_rpc.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:15:58.877  request:
00:15:58.877  {
00:15:58.877    "filename": "non_existing_file",
00:15:58.877    "bdev_name": "Nvme0n1",
00:15:58.877    "method": "bdev_nvme_apply_firmware",
00:15:58.877    "req_id": 1
00:15:58.877  }
00:15:58.877  Got JSON-RPC error response
00:15:58.877  response:
00:15:58.877  {
00:15:58.877    "code": -32603,
00:15:58.877    "message": "open file failed."
00:15:58.877  }
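[annotation] This is a deliberate negative test: bdev_nvme_apply_firmware is handed a path that does not exist and must fail with the -32603 "open file failed." error seen above. Condensed, with the same $rpc helper assumed:

    if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        rv=1               # expected branch: the RPC returns the -32603 error
    fi
    [[ -n "$rv" ]]         # the test only passes if the call really failed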
00:15:58.877   10:53:47	-- nvme/nvme_rpc.sh@32 -- # rv=1
00:15:58.877   10:53:47	-- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:15:58.877   10:53:47	-- nvme/nvme_rpc.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:16:03.073   10:53:51	-- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:16:03.073   10:53:51	-- nvme/nvme_rpc.sh@40 -- # killprocess 2171974
00:16:03.073   10:53:51	-- common/autotest_common.sh@936 -- # '[' -z 2171974 ']'
00:16:03.073   10:53:51	-- common/autotest_common.sh@940 -- # kill -0 2171974
00:16:03.073    10:53:51	-- common/autotest_common.sh@941 -- # uname
00:16:03.073   10:53:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:03.073    10:53:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2171974
00:16:03.073   10:53:51	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:03.073   10:53:51	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:03.073   10:53:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2171974'
00:16:03.073  killing process with pid 2171974
00:16:03.073   10:53:51	-- common/autotest_common.sh@955 -- # kill 2171974
00:16:03.073   10:53:51	-- common/autotest_common.sh@960 -- # wait 2171974
00:16:03.641  
00:16:03.641  real	0m9.110s
00:16:03.641  user	0m17.177s
00:16:03.641  sys	0m0.917s
00:16:03.641   10:53:52	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:03.641   10:53:52	-- common/autotest_common.sh@10 -- # set +x
00:16:03.641  ************************************
00:16:03.641  END TEST nvme_rpc
00:16:03.641  ************************************
00:16:03.641   10:53:52	-- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh
00:16:03.641   10:53:52	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:16:03.641   10:53:52	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:03.641   10:53:52	-- common/autotest_common.sh@10 -- # set +x
00:16:03.641  ************************************
00:16:03.641  START TEST nvme_rpc_timeouts
00:16:03.641  ************************************
00:16:03.641   10:53:52	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/nvme_rpc_timeouts.sh
00:16:03.641  * Looking for test storage...
00:16:03.641  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:16:03.641    10:53:52	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:03.642     10:53:52	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:03.642     10:53:52	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:03.642    10:53:52	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:03.642    10:53:52	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:03.642    10:53:52	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:03.642    10:53:52	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:03.642    10:53:52	-- scripts/common.sh@335 -- # IFS=.-:
00:16:03.642    10:53:52	-- scripts/common.sh@335 -- # read -ra ver1
00:16:03.642    10:53:52	-- scripts/common.sh@336 -- # IFS=.-:
00:16:03.642    10:53:52	-- scripts/common.sh@336 -- # read -ra ver2
00:16:03.642    10:53:52	-- scripts/common.sh@337 -- # local 'op=<'
00:16:03.642    10:53:52	-- scripts/common.sh@339 -- # ver1_l=2
00:16:03.642    10:53:52	-- scripts/common.sh@340 -- # ver2_l=1
00:16:03.642    10:53:52	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:03.642    10:53:52	-- scripts/common.sh@343 -- # case "$op" in
00:16:03.642    10:53:52	-- scripts/common.sh@344 -- # : 1
00:16:03.642    10:53:52	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:03.642    10:53:52	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:03.642     10:53:52	-- scripts/common.sh@364 -- # decimal 1
00:16:03.642     10:53:52	-- scripts/common.sh@352 -- # local d=1
00:16:03.642     10:53:52	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:03.642     10:53:52	-- scripts/common.sh@354 -- # echo 1
00:16:03.642    10:53:52	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:03.642     10:53:52	-- scripts/common.sh@365 -- # decimal 2
00:16:03.642     10:53:52	-- scripts/common.sh@352 -- # local d=2
00:16:03.642     10:53:52	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:03.642     10:53:52	-- scripts/common.sh@354 -- # echo 2
00:16:03.642    10:53:52	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:03.642    10:53:52	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:03.642    10:53:52	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:03.642    10:53:52	-- scripts/common.sh@367 -- # return 0
00:16:03.642    10:53:52	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:03.642    10:53:52	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:03.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.642  		--rc genhtml_branch_coverage=1
00:16:03.642  		--rc genhtml_function_coverage=1
00:16:03.642  		--rc genhtml_legend=1
00:16:03.642  		--rc geninfo_all_blocks=1
00:16:03.642  		--rc geninfo_unexecuted_blocks=1
00:16:03.642  		
00:16:03.642  		'
00:16:03.642    10:53:52	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:03.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.642  		--rc genhtml_branch_coverage=1
00:16:03.642  		--rc genhtml_function_coverage=1
00:16:03.642  		--rc genhtml_legend=1
00:16:03.642  		--rc geninfo_all_blocks=1
00:16:03.642  		--rc geninfo_unexecuted_blocks=1
00:16:03.642  		
00:16:03.642  		'
00:16:03.642    10:53:52	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:03.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.642  		--rc genhtml_branch_coverage=1
00:16:03.642  		--rc genhtml_function_coverage=1
00:16:03.642  		--rc genhtml_legend=1
00:16:03.642  		--rc geninfo_all_blocks=1
00:16:03.642  		--rc geninfo_unexecuted_blocks=1
00:16:03.642  		
00:16:03.642  		'
00:16:03.642    10:53:52	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:03.642  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:03.642  		--rc genhtml_branch_coverage=1
00:16:03.642  		--rc genhtml_function_coverage=1
00:16:03.642  		--rc genhtml_legend=1
00:16:03.642  		--rc geninfo_all_blocks=1
00:16:03.642  		--rc geninfo_unexecuted_blocks=1
00:16:03.642  		
00:16:03.642  		'
00:16:03.642   10:53:52	-- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:16:03.642   10:53:52	-- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_2173097
00:16:03.642   10:53:52	-- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_2173097
00:16:03.642   10:53:52	-- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=2173209
00:16:03.642   10:53:52	-- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:16:03.642   10:53:52	-- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 2173209
00:16:03.642   10:53:52	-- common/autotest_common.sh@829 -- # '[' -z 2173209 ']'
00:16:03.642   10:53:52	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:03.642   10:53:52	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:03.642   10:53:52	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:03.642  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:03.642   10:53:52	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:03.642   10:53:52	-- nvme/nvme_rpc_timeouts.sh@24 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3
00:16:03.642   10:53:52	-- common/autotest_common.sh@10 -- # set +x
00:16:03.902  [2024-12-15 10:53:52.698954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:03.902  [2024-12-15 10:53:52.699095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2173209 ]
00:16:03.902  EAL: No free 2048 kB hugepages reported on node 1
00:16:03.902  [2024-12-15 10:53:52.863734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:04.161  [2024-12-15 10:53:52.961877] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:04.161  [2024-12-15 10:53:52.962108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:04.161  [2024-12-15 10:53:52.962113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:04.161  [2024-12-15 10:53:53.161720] 'OCF_Core' volume operations registered
00:16:04.161  [2024-12-15 10:53:53.165205] 'OCF_Cache' volume operations registered
00:16:04.161  [2024-12-15 10:53:53.169137] 'OCF Composite' volume operations registered
00:16:04.161  [2024-12-15 10:53:53.172631] 'SPDK_block_device' volume operations registered
00:16:05.098   10:53:53	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:05.098   10:53:53	-- common/autotest_common.sh@862 -- # return 0
00:16:05.098   10:53:53	-- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:16:05.098  Checking default timeout settings:
00:16:05.098   10:53:53	-- nvme/nvme_rpc_timeouts.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config
00:16:05.358   10:53:54	-- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:16:05.358  Making settings changes with rpc:
00:16:05.358   10:53:54	-- nvme/nvme_rpc_timeouts.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:16:05.617   10:53:54	-- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:16:05.617  Check default vs. modified settings:
00:16:05.617   10:53:54	-- nvme/nvme_rpc_timeouts.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_config
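[annotation] The test snapshots the target configuration before and after flipping the bdev_nvme timeout knobs, then diffs three keys. Condensed; the save_config output is presumably redirected into the tmp files named earlier:

    $rpc save_config > /tmp/settings_default_2173097
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
         --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_2173097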
00:16:05.876   10:53:54	-- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:16:05.876   10:53:54	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:16:05.876    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_2173097
00:16:05.876    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:16:05.876    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:16:05.876   10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:16:05.876    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:16:05.876    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_2173097
00:16:05.876    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:16:05.876   10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:16:05.876   10:53:54	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:16:05.876   10:53:54	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:16:05.876  Setting action_on_timeout is changed as expected.
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_2173097
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_2173097
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:16:05.877  Setting timeout_us is changed as expected.
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_2173097
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_2173097
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:16:05.877    10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
00:16:05.877  Setting timeout_admin_us is changed as expected.
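[annotation] Each key is compared with the same grep/awk/sed pipeline seen above; for one setting:

    before=$(grep timeout_us /tmp/settings_default_2173097  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep timeout_us /tmp/settings_modified_2173097  | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ "$before" != "$after" ]] && echo "Setting timeout_us is changed as expected."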
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_2173097 /tmp/settings_modified_2173097
00:16:05.877   10:53:54	-- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 2173209
00:16:05.877   10:53:54	-- common/autotest_common.sh@936 -- # '[' -z 2173209 ']'
00:16:05.877   10:53:54	-- common/autotest_common.sh@940 -- # kill -0 2173209
00:16:05.877    10:53:54	-- common/autotest_common.sh@941 -- # uname
00:16:05.877   10:53:54	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:05.877    10:53:54	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2173209
00:16:06.136   10:53:54	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:06.136   10:53:54	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:06.136   10:53:54	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2173209'
00:16:06.136  killing process with pid 2173209
00:16:06.136   10:53:54	-- common/autotest_common.sh@955 -- # kill 2173209
00:16:06.136   10:53:54	-- common/autotest_common.sh@960 -- # wait 2173209
00:16:06.703   10:53:55	-- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:16:06.703  RPC TIMEOUT SETTING TEST PASSED.
00:16:06.703  
00:16:06.703  real	0m3.056s
00:16:06.703  user	0m6.116s
00:16:06.703  sys	0m0.935s
00:16:06.703   10:53:55	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:06.703   10:53:55	-- common/autotest_common.sh@10 -- # set +x
00:16:06.703  ************************************
00:16:06.703  END TEST nvme_rpc_timeouts
00:16:06.703  ************************************
00:16:06.703   10:53:55	-- spdk/autotest.sh@238 -- # '[' 0 -eq 0 ']'
00:16:06.703    10:53:55	-- spdk/autotest.sh@238 -- # uname -s
00:16:06.703   10:53:55	-- spdk/autotest.sh@238 -- # '[' Linux = Linux ']'
00:16:06.703   10:53:55	-- spdk/autotest.sh@239 -- # run_test sw_hotplug /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh
00:16:06.703   10:53:55	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:16:06.703   10:53:55	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:06.703   10:53:55	-- common/autotest_common.sh@10 -- # set +x
00:16:06.703  ************************************
00:16:06.703  START TEST sw_hotplug
00:16:06.703  ************************************
00:16:06.703   10:53:55	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh
00:16:06.703  * Looking for test storage...
00:16:06.703  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme
00:16:06.703    10:53:55	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:16:06.703     10:53:55	-- common/autotest_common.sh@1690 -- # lcov --version
00:16:06.703     10:53:55	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:16:06.703    10:53:55	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:16:06.703    10:53:55	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:16:06.703    10:53:55	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:16:06.703    10:53:55	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:16:06.703    10:53:55	-- scripts/common.sh@335 -- # IFS=.-:
00:16:06.703    10:53:55	-- scripts/common.sh@335 -- # read -ra ver1
00:16:06.703    10:53:55	-- scripts/common.sh@336 -- # IFS=.-:
00:16:06.703    10:53:55	-- scripts/common.sh@336 -- # read -ra ver2
00:16:06.703    10:53:55	-- scripts/common.sh@337 -- # local 'op=<'
00:16:06.703    10:53:55	-- scripts/common.sh@339 -- # ver1_l=2
00:16:06.703    10:53:55	-- scripts/common.sh@340 -- # ver2_l=1
00:16:06.703    10:53:55	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:16:06.703    10:53:55	-- scripts/common.sh@343 -- # case "$op" in
00:16:06.703    10:53:55	-- scripts/common.sh@344 -- # : 1
00:16:06.703    10:53:55	-- scripts/common.sh@363 -- # (( v = 0 ))
00:16:06.703    10:53:55	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:06.703     10:53:55	-- scripts/common.sh@364 -- # decimal 1
00:16:06.703     10:53:55	-- scripts/common.sh@352 -- # local d=1
00:16:06.703     10:53:55	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:06.703     10:53:55	-- scripts/common.sh@354 -- # echo 1
00:16:06.703    10:53:55	-- scripts/common.sh@364 -- # ver1[v]=1
00:16:06.703     10:53:55	-- scripts/common.sh@365 -- # decimal 2
00:16:06.703     10:53:55	-- scripts/common.sh@352 -- # local d=2
00:16:06.703     10:53:55	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:06.703     10:53:55	-- scripts/common.sh@354 -- # echo 2
00:16:06.703    10:53:55	-- scripts/common.sh@365 -- # ver2[v]=2
00:16:06.703    10:53:55	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:16:06.703    10:53:55	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:16:06.703    10:53:55	-- scripts/common.sh@367 -- # return 0
00:16:06.703    10:53:55	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:06.703    10:53:55	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:16:06.703  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:06.703  		--rc genhtml_branch_coverage=1
00:16:06.703  		--rc genhtml_function_coverage=1
00:16:06.703  		--rc genhtml_legend=1
00:16:06.703  		--rc geninfo_all_blocks=1
00:16:06.703  		--rc geninfo_unexecuted_blocks=1
00:16:06.703  		
00:16:06.703  		'
00:16:06.703    10:53:55	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:16:06.703  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:06.703  		--rc genhtml_branch_coverage=1
00:16:06.703  		--rc genhtml_function_coverage=1
00:16:06.703  		--rc genhtml_legend=1
00:16:06.703  		--rc geninfo_all_blocks=1
00:16:06.703  		--rc geninfo_unexecuted_blocks=1
00:16:06.703  		
00:16:06.703  		'
00:16:06.703    10:53:55	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:16:06.703  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:06.703  		--rc genhtml_branch_coverage=1
00:16:06.703  		--rc genhtml_function_coverage=1
00:16:06.703  		--rc genhtml_legend=1
00:16:06.703  		--rc geninfo_all_blocks=1
00:16:06.703  		--rc geninfo_unexecuted_blocks=1
00:16:06.703  		
00:16:06.703  		'
00:16:06.703    10:53:55	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:16:06.703  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:06.703  		--rc genhtml_branch_coverage=1
00:16:06.703  		--rc genhtml_function_coverage=1
00:16:06.703  		--rc genhtml_legend=1
00:16:06.703  		--rc geninfo_all_blocks=1
00:16:06.703  		--rc geninfo_unexecuted_blocks=1
00:16:06.703  		
00:16:06.703  		'
00:16:06.703   10:53:55	-- nvme/sw_hotplug.sh@122 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:16:09.996  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:16:09.996  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:16:09.996  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:16:09.996   10:53:58	-- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6
00:16:09.996   10:53:58	-- nvme/sw_hotplug.sh@125 -- # hotplug_events=3
00:16:09.996   10:53:58	-- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace))
00:16:09.996    10:53:58	-- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace
00:16:09.996    10:53:58	-- scripts/common.sh@311 -- # local bdf bdfs
00:16:09.996    10:53:58	-- scripts/common.sh@312 -- # local nvmes
00:16:09.996    10:53:58	-- scripts/common.sh@314 -- # [[ -n '' ]]
00:16:09.996    10:53:58	-- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:16:09.996     10:53:58	-- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02
00:16:09.996     10:53:58	-- scripts/common.sh@297 -- # local bdf=
00:16:09.996      10:53:58	-- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02
00:16:09.996      10:53:58	-- scripts/common.sh@232 -- # local class
00:16:09.996      10:53:58	-- scripts/common.sh@233 -- # local subclass
00:16:09.996      10:53:58	-- scripts/common.sh@234 -- # local progif
00:16:09.996       10:53:58	-- scripts/common.sh@235 -- # printf %02x 1
00:16:09.996      10:53:58	-- scripts/common.sh@235 -- # class=01
00:16:09.996       10:53:58	-- scripts/common.sh@236 -- # printf %02x 8
00:16:09.996      10:53:58	-- scripts/common.sh@236 -- # subclass=08
00:16:09.996       10:53:58	-- scripts/common.sh@237 -- # printf %02x 2
00:16:09.996      10:53:58	-- scripts/common.sh@237 -- # progif=02
00:16:09.996      10:53:58	-- scripts/common.sh@239 -- # hash lspci
00:16:09.996      10:53:58	-- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']'
00:16:09.996      10:53:58	-- scripts/common.sh@241 -- # lspci -mm -n -D
00:16:09.996      10:53:58	-- scripts/common.sh@242 -- # grep -i -- -p02
00:16:09.996      10:53:58	-- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:16:09.996      10:53:58	-- scripts/common.sh@244 -- # tr -d '"'
00:16:09.996     10:53:58	-- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@")
00:16:09.996     10:53:58	-- scripts/common.sh@300 -- # pci_can_use 0000:5e:00.0
00:16:09.996     10:53:58	-- scripts/common.sh@15 -- # local i
00:16:09.996     10:53:58	-- scripts/common.sh@18 -- # [[    =~  0000:5e:00.0  ]]
00:16:09.996     10:53:58	-- scripts/common.sh@22 -- # [[ -z '' ]]
00:16:09.996     10:53:58	-- scripts/common.sh@24 -- # return 0
00:16:09.996     10:53:58	-- scripts/common.sh@301 -- # echo 0000:5e:00.0
00:16:09.996    10:53:58	-- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}"
00:16:09.996    10:53:58	-- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]]
00:16:09.996     10:53:58	-- scripts/common.sh@322 -- # uname -s
00:16:09.996    10:53:58	-- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]]
00:16:09.996    10:53:58	-- scripts/common.sh@325 -- # bdfs+=("$bdf")
00:16:09.996    10:53:58	-- scripts/common.sh@327 -- # (( 1 ))
00:16:09.996    10:53:58	-- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0
00:16:09.996   10:53:58	-- nvme/sw_hotplug.sh@127 -- # nvme_count=1
00:16:09.996   10:53:58	-- nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}")
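[annotation] nvme_in_userspace enumerates NVMe controllers by PCI class code (class 01 = mass storage, subclass 08 = NVM, progif 02 = NVMe), using the exact pipeline traced above:

    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'   # -> 0000:5e:00.0

Only one controller qualifies on this rig, so nvme_count=1 and the nvmes array is trimmed to that single BDF.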
00:16:09.996   10:53:58	-- nvme/sw_hotplug.sh@130 -- # xtrace_disable
00:16:09.996   10:53:58	-- common/autotest_common.sh@10 -- # set +x
00:16:13.290   10:54:01	-- nvme/sw_hotplug.sh@135 -- # run_hotplug
00:16:13.290   10:54:01	-- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:16:13.290   10:54:01	-- nvme/sw_hotplug.sh@73 -- # hotplug_pid=2176301
00:16:13.290   10:54:01	-- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false
00:16:13.290   10:54:01	-- nvme/sw_hotplug.sh@68 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning
00:16:13.290   10:54:01	-- nvme/sw_hotplug.sh@14 -- # local helper_time=0
00:16:13.290    10:54:01	-- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false
00:16:13.290    10:54:01	-- common/autotest_common.sh@708 -- # [[ -t 0 ]]
00:16:13.290    10:54:01	-- common/autotest_common.sh@708 -- # exec
00:16:13.290    10:54:01	-- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R
00:16:13.290     10:54:01	-- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 false
00:16:13.290     10:54:01	-- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3
00:16:13.290     10:54:01	-- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6
00:16:13.290     10:54:01	-- nvme/sw_hotplug.sh@24 -- # local use_bdev=false
00:16:13.290     10:54:01	-- nvme/sw_hotplug.sh@25 -- # local dev bdfs
00:16:13.290     10:54:01	-- nvme/sw_hotplug.sh@31 -- # sleep 6
00:16:13.290  EAL: No free 2048 kB hugepages reported on node 1
00:16:13.290  Initializing NVMe Controllers
00:16:13.859  Attaching to 0000:5e:00.0
00:16:16.399  Attached to 0000:5e:00.0
00:16:16.399  Initialization complete. Starting I/O...
00:16:16.399  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        128 I/Os completed (+128)
00:16:16.399  
00:16:16.967  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       3200 I/Os completed (+3072)
00:16:16.967  
00:16:17.903  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       6400 I/Os completed (+3200)
00:16:17.904  
00:16:18.841     10:54:07	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:16:18.841     10:54:07	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:16:18.841     10:54:07	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:16:18.841  [2024-12-15 10:54:07.750162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:16:18.841  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:18.842  [2024-12-15 10:54:07.750219] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842  [2024-12-15 10:54:07.750245] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842  [2024-12-15 10:54:07.750260] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842  [2024-12-15 10:54:07.750274] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:18.842  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:18.842  [2024-12-15 10:54:07.751393] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842  [2024-12-15 10:54:07.751422] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842  [2024-12-15 10:54:07.751437] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842  [2024-12-15 10:54:07.751451] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:18.842     10:54:07	-- nvme/sw_hotplug.sh@38 -- # false
00:16:18.842     10:54:07	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:16:18.842  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:5e:00.0/vendor
00:16:18.842  EAL: Scan for (pci) bus failed.
00:16:19.101     10:54:07	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:16:19.101     10:54:07	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:16:19.101     10:54:07	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
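[annotation] Each hotplug event here is a software-simulated surprise removal: the harness echoes into sysfs to remove the device while the hotplug app has I/O in flight, then rescans and rebinds it to vfio-pci before the next round. The script bodies are not printed, so the following is only a plausible reading of the echo lines in the trace (sw_hotplug.sh @35/@44/@47/@48):

    bdf=0000:5e:00.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove                   # yank the device out from under the app
    sleep 1
    echo 1 > /sys/bus/pci/rescan                                # rediscover it
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override   # steer it back to vfio-pci
    echo "$bdf"   > /sys/bus/pci/drivers_probe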
00:16:19.101  
00:16:20.039  
00:16:20.977  
00:16:21.917  
00:16:22.177     10:54:11	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:16:22.177     10:54:11	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:16:22.177     10:54:11	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:16:23.115  Attaching to 0000:5e:00.0
00:16:25.021  Attached to 0000:5e:00.0
00:16:25.021  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):          0 I/Os completed (+0)
00:16:25.021  
00:16:25.021  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        128 I/Os completed (+128)
00:16:25.021  
00:16:25.281  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        256 I/Os completed (+128)
00:16:25.281  
00:16:25.849  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       2816 I/Os completed (+2560)
00:16:25.849  
00:16:27.228  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       6144 I/Os completed (+3328)
00:16:27.228  
00:16:28.167  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       9344 I/Os completed (+3200)
00:16:28.167  
00:16:28.167     10:54:17	-- nvme/sw_hotplug.sh@56 -- # false
00:16:28.167     10:54:17	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:16:28.167     10:54:17	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:16:28.167     10:54:17	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:16:28.498  [2024-12-15 10:54:17.189767] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:16:28.498  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:28.498  [2024-12-15 10:54:17.189808] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498  [2024-12-15 10:54:17.189833] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498  [2024-12-15 10:54:17.189847] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498  [2024-12-15 10:54:17.189861] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:28.498  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:28.498  [2024-12-15 10:54:17.191120] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498  [2024-12-15 10:54:17.191147] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498  [2024-12-15 10:54:17.191162] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498  [2024-12-15 10:54:17.191177] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:28.498     10:54:17	-- nvme/sw_hotplug.sh@38 -- # false
00:16:28.498     10:54:17	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:16:28.498     10:54:17	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:16:28.498     10:54:17	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:16:28.498     10:54:17	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:16:29.092  
00:16:30.030  
00:16:30.969  
00:16:31.908     10:54:20	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:16:31.908     10:54:20	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:16:31.908     10:54:20	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:16:32.476  Attaching to 0000:5e:00.0
00:16:35.015  Attached to 0000:5e:00.0
00:16:35.015  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):          0 I/Os completed (+0)
00:16:35.015  
00:16:35.015  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        128 I/Os completed (+128)
00:16:35.015  
00:16:35.015  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):        256 I/Os completed (+128)
00:16:35.015  
00:16:35.015  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       1280 I/Os completed (+1024)
00:16:35.015  
00:16:35.952  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       4480 I/Os completed (+3200)
00:16:35.952  
00:16:36.890  INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  ):       7808 I/Os completed (+3328)
00:16:36.890  
00:16:37.828     10:54:26	-- nvme/sw_hotplug.sh@56 -- # false
00:16:37.828     10:54:26	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:16:37.828     10:54:26	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:16:37.828     10:54:26	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:16:37.828  [2024-12-15 10:54:26.695444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:16:37.828  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:37.828  [2024-12-15 10:54:26.695484] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828  [2024-12-15 10:54:26.695507] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828  [2024-12-15 10:54:26.695526] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828  [2024-12-15 10:54:26.695540] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828  Controller removed: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:37.828  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:37.828  [2024-12-15 10:54:26.696700] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828  [2024-12-15 10:54:26.696726] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828  [2024-12-15 10:54:26.696741] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828  [2024-12-15 10:54:26.696756] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:37.828     10:54:26	-- nvme/sw_hotplug.sh@38 -- # false
00:16:37.828     10:54:26	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:16:38.088     10:54:26	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:16:38.088     10:54:26	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:16:38.088     10:54:26	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:16:38.088  
00:16:39.026  
00:16:39.964  
00:16:40.902  
00:16:41.161     10:54:30	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:16:41.161     10:54:30	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:16:41.161     10:54:30	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:16:42.097  Attaching to 0000:5e:00.0
00:16:44.003  Attached to 0000:5e:00.0
00:16:44.003  unregister_dev: INTEL SSDPE2KX040T8  (BTLJ83030AK84P0DGN  )
00:16:47.295     10:54:36	-- nvme/sw_hotplug.sh@56 -- # false
00:16:47.295     10:54:36	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:16:47.295    10:54:36	-- common/autotest_common.sh@716 -- # time=34.46
00:16:47.295    10:54:36	-- common/autotest_common.sh@718 -- # echo 34.46
00:16:47.295   10:54:36	-- nvme/sw_hotplug.sh@16 -- # helper_time=34.46
00:16:47.295   10:54:36	-- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 34.46 1
00:16:47.295  remove_attach_helper took 34.46s to complete (handling 1 nvme drive(s))
00:16:47.295   10:54:36	-- nvme/sw_hotplug.sh@79 -- # sleep 6
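The 34.46 figure comes from the timing_cmd wrapper seen in the trace: TIMEFORMAT=%2R makes bash's `time` keyword print only the elapsed real time with two decimals, which the helper then reports. A minimal standalone sketch of the same mechanism, assuming a remove_attach_helper function like the one traced here (hypothetical code, not the script itself):

    # Capture only `time`'s stderr report; the helper's own stdout is discarded.
    TIMEFORMAT=%2R
    helper_time=$( { time remove_attach_helper 3 6 true > /dev/null; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 1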
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@81 -- # kill -0 2176301
00:16:53.869  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (2176301) - No such process
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@83 -- # wait 2176301
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@95 -- # local dev
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=2181130
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@97 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt
00:16:53.869   10:54:42	-- nvme/sw_hotplug.sh@101 -- # waitforlisten 2181130
00:16:53.869   10:54:42	-- common/autotest_common.sh@829 -- # '[' -z 2181130 ']'
00:16:53.869   10:54:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:53.869   10:54:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:16:53.869   10:54:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:53.869  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:53.869   10:54:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:16:53.869   10:54:42	-- common/autotest_common.sh@10 -- # set +x
00:16:53.869  [2024-12-15 10:54:42.230916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:16:53.869  [2024-12-15 10:54:42.231001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181130 ]
00:16:53.869  EAL: No free 2048 kB hugepages reported on node 1
00:16:53.869  [2024-12-15 10:54:42.340856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:53.869  [2024-12-15 10:54:42.445429] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:16:53.869  [2024-12-15 10:54:42.445588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:53.869  [2024-12-15 10:54:42.640660] 'OCF_Core' volume operations registered
00:16:53.869  [2024-12-15 10:54:42.643857] 'OCF_Cache' volume operations registered
00:16:53.869  [2024-12-15 10:54:42.647456] 'OCF Composite' volume operations registered
00:16:53.869  [2024-12-15 10:54:42.650677] 'SPDK_block_device' volume operations registered
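With spdk_tgt (pid 2181130) up and listening on /var/tmp/spdk.sock, the test drives it over JSON-RPC; the rpc_cmd wrapper in the trace resolves to SPDK's rpc.py client. A hypothetical standalone equivalent of the attach call that follows, run from the spdk repo root (assuming the default socket path):

    # Attach the PCIe NVMe controller at 0000:5e:00.0 as bdev "Nvme00".
    ./scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:5e:00.0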
00:16:54.438   10:54:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:54.438   10:54:43	-- common/autotest_common.sh@862 -- # return 0
00:16:54.438   10:54:43	-- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}"
00:16:54.438   10:54:43	-- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:5e:00.0
00:16:54.438   10:54:43	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:54.438   10:54:43	-- common/autotest_common.sh@10 -- # set +x
00:16:57.731  Nvme00n1
00:16:57.731   10:54:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.731   10:54:46	-- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6
00:16:57.731   10:54:46	-- common/autotest_common.sh@897 -- # local bdev_name=Nvme00n1
00:16:57.731   10:54:46	-- common/autotest_common.sh@898 -- # local bdev_timeout=6
00:16:57.731   10:54:46	-- common/autotest_common.sh@899 -- # local i
00:16:57.731   10:54:46	-- common/autotest_common.sh@900 -- # [[ -z 6 ]]
00:16:57.731   10:54:46	-- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:16:57.731   10:54:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.731   10:54:46	-- common/autotest_common.sh@10 -- # set +x
00:16:57.731   10:54:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.731   10:54:46	-- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6
00:16:57.731   10:54:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.731   10:54:46	-- common/autotest_common.sh@10 -- # set +x
00:16:57.731  [
00:16:57.731  {
00:16:57.731  "name": "Nvme00n1",
00:16:57.731  "aliases": [
00:16:57.731  "859465b8-fce2-4849-8b88-727ae93b4702"
00:16:57.731  ],
00:16:57.731  "product_name": "NVMe disk",
00:16:57.731  "block_size": 512,
00:16:57.731  "num_blocks": 7814037168,
00:16:57.731  "uuid": "859465b8-fce2-4849-8b88-727ae93b4702",
00:16:57.731  "assigned_rate_limits": {
00:16:57.731  "rw_ios_per_sec": 0,
00:16:57.731  "rw_mbytes_per_sec": 0,
00:16:57.731  "r_mbytes_per_sec": 0,
00:16:57.731  "w_mbytes_per_sec": 0
00:16:57.731  },
00:16:57.731  "claimed": false,
00:16:57.731  "zoned": false,
00:16:57.731  "supported_io_types": {
00:16:57.731  "read": true,
00:16:57.731  "write": true,
00:16:57.731  "unmap": true,
00:16:57.731  "write_zeroes": true,
00:16:57.731  "flush": true,
00:16:57.731  "reset": true,
00:16:57.731  "compare": false,
00:16:57.731  "compare_and_write": false,
00:16:57.731  "abort": true,
00:16:57.731  "nvme_admin": true,
00:16:57.731  "nvme_io": true
00:16:57.731  },
00:16:57.731  "driver_specific": {
00:16:57.731  "nvme": [
00:16:57.731  {
00:16:57.731  "pci_address": "0000:5e:00.0",
00:16:57.731  "trid": {
00:16:57.731  "trtype": "PCIe",
00:16:57.731  "traddr": "0000:5e:00.0"
00:16:57.731  },
00:16:57.731  "ctrlr_data": {
00:16:57.731  "cntlid": 0,
00:16:57.731  "vendor_id": "0x8086",
00:16:57.731  "model_number": "INTEL SSDPE2KX040T8",
00:16:57.731  "serial_number": "BTLJ83030AK84P0DGN",
00:16:57.731  "firmware_revision": "VDV10184",
00:16:57.731  "oacs": {
00:16:57.731  "security": 0,
00:16:57.731  "format": 1,
00:16:57.731  "firmware": 1,
00:16:57.731  "ns_manage": 1
00:16:57.731  },
00:16:57.731  "multi_ctrlr": false,
00:16:57.731  "ana_reporting": false
00:16:57.731  },
00:16:57.731  "vs": {
00:16:57.731  "nvme_version": "1.2"
00:16:57.731  },
00:16:57.731  "ns_data": {
00:16:57.731  "id": 1,
00:16:57.731  "can_share": false
00:16:57.731  }
00:16:57.731  }
00:16:57.731  ],
00:16:57.731  "mp_policy": "active_passive"
00:16:57.731  }
00:16:57.731  }
00:16:57.731  ]
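This bdev_get_bdevs JSON is what the later hotplug checks consume: sw_hotplug.sh@58 pulls the PCI address out of it with jq. The same pipeline can be run by hand (assuming the rpc.py client and the default RPC socket):

    ./scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort
    # prints: 0000:5e:00.0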
00:16:57.731   10:54:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.731   10:54:46	-- common/autotest_common.sh@905 -- # return 0
00:16:57.731   10:54:46	-- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:16:57.731   10:54:46	-- common/autotest_common.sh@561 -- # xtrace_disable
00:16:57.731   10:54:46	-- common/autotest_common.sh@10 -- # set +x
00:16:57.731   10:54:46	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:57.731   10:54:46	-- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true
00:16:57.731   10:54:46	-- nvme/sw_hotplug.sh@14 -- # local helper_time=0
00:16:57.731    10:54:46	-- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true
00:16:57.731    10:54:46	-- common/autotest_common.sh@708 -- # [[ -t 0 ]]
00:16:57.731    10:54:46	-- common/autotest_common.sh@708 -- # exec
00:16:57.731    10:54:46	-- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R
00:16:57.731     10:54:46	-- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true
00:16:57.731     10:54:46	-- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3
00:16:57.731     10:54:46	-- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6
00:16:57.731     10:54:46	-- nvme/sw_hotplug.sh@24 -- # local use_bdev=true
00:16:57.731     10:54:46	-- nvme/sw_hotplug.sh@25 -- # local dev bdfs
00:16:57.731     10:54:46	-- nvme/sw_hotplug.sh@31 -- # sleep 6
00:17:04.304     10:54:52	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:04.304     10:54:52	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:04.304     10:54:52	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:04.304  [2024-12-15 10:54:52.120948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:04.304  [2024-12-15 10:54:52.121069] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:04.304  [2024-12-15 10:54:52.121095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:04.304  [2024-12-15 10:54:52.121112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:04.304  [2024-12-15 10:54:52.121137] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:04.304  [2024-12-15 10:54:52.121150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:04.304  [2024-12-15 10:54:52.121163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:04.304  [2024-12-15 10:54:52.121178] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:04.304  [2024-12-15 10:54:52.121190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:04.304  [2024-12-15 10:54:52.121203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:04.304  [2024-12-15 10:54:52.121218] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:04.304  [2024-12-15 10:54:52.121230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:04.304  [2024-12-15 10:54:52.121243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:04.304     10:54:52	-- nvme/sw_hotplug.sh@38 -- # true
00:17:04.304     10:54:52	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:17:09.588      10:54:58	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:17:09.588      10:54:58	-- nvme/sw_hotplug.sh@40 -- # jq length
00:17:09.588      10:54:58	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:09.588      10:54:58	-- common/autotest_common.sh@10 -- # set +x
00:17:09.588      10:54:58	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:09.588     10:54:58	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:17:09.588     10:54:58	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:17:09.588     10:54:58	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:17:09.588     10:54:58	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:17:09.588     10:54:58	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:17:12.880     10:55:01	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:17:12.880     10:55:01	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:17:12.880     10:55:01	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@56 -- # true
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:17:19.453      10:55:07	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:17:19.453      10:55:07	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:17:19.453      10:55:07	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:19.453      10:55:07	-- common/autotest_common.sh@10 -- # set +x
00:17:19.453      10:55:07	-- nvme/sw_hotplug.sh@58 -- # sort
00:17:19.453      10:55:07	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
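The backslashes on the right-hand side of this comparison are an xtrace artifact, not part of the data: when the pattern operand of [[ ... == ... ]] is quoted, bash's trace renders it with every character escaped to show it is matched literally. A tiny hypothetical snippet reproducing the effect:

    set -x
    bdf=0000:5e:00.0
    [[ $bdf == "$bdf" ]]
    # traces as: [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]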
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:19.453  [2024-12-15 10:55:07.735568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:19.453  [2024-12-15 10:55:07.735678] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:19.453  [2024-12-15 10:55:07.735701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:19.453  [2024-12-15 10:55:07.735717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:19.453  [2024-12-15 10:55:07.735740] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:19.453  [2024-12-15 10:55:07.735758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:19.453  [2024-12-15 10:55:07.735771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:19.453  [2024-12-15 10:55:07.735785] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:19.453  [2024-12-15 10:55:07.735798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:19.453  [2024-12-15 10:55:07.735811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:19.453  [2024-12-15 10:55:07.735825] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:19.453  [2024-12-15 10:55:07.735837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:19.453  [2024-12-15 10:55:07.735850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@38 -- # true
00:17:19.453     10:55:07	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:17:24.730      10:55:13	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:17:24.730      10:55:13	-- nvme/sw_hotplug.sh@40 -- # jq length
00:17:24.730      10:55:13	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:24.730      10:55:13	-- common/autotest_common.sh@10 -- # set +x
00:17:24.990      10:55:13	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:24.990     10:55:13	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:17:24.990     10:55:13	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:17:24.990     10:55:13	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:17:24.990     10:55:13	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:17:24.990     10:55:13	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:17:28.280     10:55:17	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:17:28.280     10:55:17	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:17:28.280     10:55:17	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@56 -- # true
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:17:34.854      10:55:23	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:17:34.854      10:55:23	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:17:34.854      10:55:23	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:34.854      10:55:23	-- common/autotest_common.sh@10 -- # set +x
00:17:34.854      10:55:23	-- nvme/sw_hotplug.sh@58 -- # sort
00:17:34.854      10:55:23	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:34.854  [2024-12-15 10:55:23.351610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:34.854  [2024-12-15 10:55:23.351718] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:34.854  [2024-12-15 10:55:23.351742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:34.854  [2024-12-15 10:55:23.351759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:34.854  [2024-12-15 10:55:23.351778] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:34.854  [2024-12-15 10:55:23.351791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:34.854  [2024-12-15 10:55:23.351805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:34.854  [2024-12-15 10:55:23.351820] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:34.854  [2024-12-15 10:55:23.351837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:34.854  [2024-12-15 10:55:23.351850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:34.854  [2024-12-15 10:55:23.351865] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:34.854  [2024-12-15 10:55:23.351876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:34.854  [2024-12-15 10:55:23.351889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@38 -- # true
00:17:34.854     10:55:23	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:17:41.615      10:55:29	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:17:41.615      10:55:29	-- nvme/sw_hotplug.sh@40 -- # jq length
00:17:41.615      10:55:29	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:41.615      10:55:29	-- common/autotest_common.sh@10 -- # set +x
00:17:41.615      10:55:29	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:41.615     10:55:29	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:17:41.615     10:55:29	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:17:41.615     10:55:29	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:17:41.615     10:55:29	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:17:41.615     10:55:29	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:17:44.152     10:55:32	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:17:44.152     10:55:32	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:17:44.152     10:55:32	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@56 -- # true
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:17:50.727      10:55:38	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:17:50.727      10:55:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.727      10:55:38	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:17:50.727      10:55:38	-- common/autotest_common.sh@10 -- # set +x
00:17:50.727      10:55:38	-- nvme/sw_hotplug.sh@58 -- # sort
00:17:50.727      10:55:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:50.727    10:55:38	-- common/autotest_common.sh@716 -- # time=52.86
00:17:50.727    10:55:38	-- common/autotest_common.sh@718 -- # echo 52.86
00:17:50.727   10:55:38	-- nvme/sw_hotplug.sh@16 -- # helper_time=52.86
00:17:50.727   10:55:38	-- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 52.86 1
00:17:50.727  remove_attach_helper took 52.86s to complete (handling 1 nvme drive(s))
00:17:50.727   10:55:38	-- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:17:50.727   10:55:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.727   10:55:38	-- common/autotest_common.sh@10 -- # set +x
00:17:50.727   10:55:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:50.727   10:55:38	-- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:17:50.727   10:55:38	-- common/autotest_common.sh@561 -- # xtrace_disable
00:17:50.727   10:55:38	-- common/autotest_common.sh@10 -- # set +x
00:17:50.727   10:55:38	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:50.727   10:55:38	-- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true
00:17:50.727   10:55:38	-- nvme/sw_hotplug.sh@14 -- # local helper_time=0
00:17:50.727    10:55:38	-- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true
00:17:50.727    10:55:38	-- common/autotest_common.sh@708 -- # [[ -t 0 ]]
00:17:50.727    10:55:38	-- common/autotest_common.sh@708 -- # exec
00:17:50.727    10:55:38	-- common/autotest_common.sh@710 -- # local time=0 TIMEFORMAT=%2R
00:17:50.727     10:55:38	-- common/autotest_common.sh@716 -- # remove_attach_helper 3 6 true
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@24 -- # local use_bdev=true
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@25 -- # local dev bdfs
00:17:50.727     10:55:38	-- nvme/sw_hotplug.sh@31 -- # sleep 6
00:17:56.003     10:55:44	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:17:56.003     10:55:44	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:17:56.003     10:55:44	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:17:56.264  [2024-12-15 10:55:45.068991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:17:56.264  [2024-12-15 10:55:45.069106] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:56.264  [2024-12-15 10:55:45.069129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:56.264  [2024-12-15 10:55:45.069145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.264  [2024-12-15 10:55:45.069169] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:56.264  [2024-12-15 10:55:45.069182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:56.264  [2024-12-15 10:55:45.069195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.264  [2024-12-15 10:55:45.069209] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:56.264  [2024-12-15 10:55:45.069222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:56.264  [2024-12-15 10:55:45.069234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.264  [2024-12-15 10:55:45.069249] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:56.264  [2024-12-15 10:55:45.069260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:56.264  [2024-12-15 10:55:45.069273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:56.264     10:55:45	-- nvme/sw_hotplug.sh@38 -- # true
00:17:56.264     10:55:45	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:18:02.835      10:55:51	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:18:02.835      10:55:51	-- nvme/sw_hotplug.sh@40 -- # jq length
00:18:02.835      10:55:51	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:02.835      10:55:51	-- common/autotest_common.sh@10 -- # set +x
00:18:02.835      10:55:51	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:02.835     10:55:51	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:18:02.835     10:55:51	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:18:02.835     10:55:51	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:18:02.835     10:55:51	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:18:02.835     10:55:51	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:18:06.128     10:55:54	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:18:06.128     10:55:54	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:18:06.128     10:55:54	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@56 -- # true
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:18:12.701      10:56:00	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:18:12.701      10:56:00	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:12.701      10:56:00	-- common/autotest_common.sh@10 -- # set +x
00:18:12.701      10:56:00	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:18:12.701      10:56:00	-- nvme/sw_hotplug.sh@58 -- # sort
00:18:12.701      10:56:00	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:18:12.701  [2024-12-15 10:56:00.679536] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:18:12.701  [2024-12-15 10:56:00.679650] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:12.701  [2024-12-15 10:56:00.679674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:12.701  [2024-12-15 10:56:00.679691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.701  [2024-12-15 10:56:00.679721] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:12.701  [2024-12-15 10:56:00.679734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:12.701  [2024-12-15 10:56:00.679747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.701  [2024-12-15 10:56:00.679761] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:12.701  [2024-12-15 10:56:00.679773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:12.701  [2024-12-15 10:56:00.679786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.701  [2024-12-15 10:56:00.679800] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:12.701  [2024-12-15 10:56:00.679812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:12.701  [2024-12-15 10:56:00.679825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@38 -- # true
00:18:12.701     10:56:00	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:18:17.977      10:56:06	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:18:17.977      10:56:06	-- nvme/sw_hotplug.sh@40 -- # jq length
00:18:17.977      10:56:06	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.977      10:56:06	-- common/autotest_common.sh@10 -- # set +x
00:18:17.977      10:56:06	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.977     10:56:06	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:18:17.977     10:56:06	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:18:17.977     10:56:06	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:18:17.977     10:56:06	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:18:17.977     10:56:06	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:18:21.269     10:56:10	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:18:21.269     10:56:10	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:18:21.269     10:56:10	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@56 -- # true
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:18:27.843      10:56:16	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:18:27.843      10:56:16	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:18:27.843      10:56:16	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.843      10:56:16	-- common/autotest_common.sh@10 -- # set +x
00:18:27.843      10:56:16	-- nvme/sw_hotplug.sh@58 -- # sort
00:18:27.843      10:56:16	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}"
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@35 -- # echo 1
00:18:27.843  [2024-12-15 10:56:16.287497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [0000:5e:00.0] in failed state.
00:18:27.843  [2024-12-15 10:56:16.287600] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:27.843  [2024-12-15 10:56:16.287627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:27.843  [2024-12-15 10:56:16.287644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:27.843  [2024-12-15 10:56:16.287665] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:27.843  [2024-12-15 10:56:16.287677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:27.843  [2024-12-15 10:56:16.287691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:27.843  [2024-12-15 10:56:16.287711] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:27.843  [2024-12-15 10:56:16.287724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:27.843  [2024-12-15 10:56:16.287737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:27.843  [2024-12-15 10:56:16.287751] nvme_pcie_common.c: 742:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:18:27.843  [2024-12-15 10:56:16.287763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:27.843  [2024-12-15 10:56:16.287777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@38 -- # true
00:18:27.843     10:56:16	-- nvme/sw_hotplug.sh@40 -- # sleep 6
00:18:34.417      10:56:22	-- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs
00:18:34.417      10:56:22	-- nvme/sw_hotplug.sh@40 -- # jq length
00:18:34.417      10:56:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:34.417      10:56:22	-- common/autotest_common.sh@10 -- # set +x
00:18:34.418      10:56:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:34.418     10:56:22	-- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 ))
00:18:34.418     10:56:22	-- nvme/sw_hotplug.sh@44 -- # echo 1
00:18:34.418     10:56:22	-- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}"
00:18:34.418     10:56:22	-- nvme/sw_hotplug.sh@47 -- # echo vfio-pci
00:18:34.418     10:56:22	-- nvme/sw_hotplug.sh@48 -- # echo 0000:5e:00.0
00:18:36.997     10:56:25	-- nvme/sw_hotplug.sh@49 -- # echo 0000:5e:00.0
00:18:36.997     10:56:25	-- nvme/sw_hotplug.sh@50 -- # echo ''
00:18:36.997     10:56:25	-- nvme/sw_hotplug.sh@54 -- # sleep 6
00:18:43.570     10:56:31	-- nvme/sw_hotplug.sh@56 -- # true
00:18:43.570     10:56:31	-- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort))
00:18:43.570      10:56:31	-- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs
00:18:43.570      10:56:31	-- common/autotest_common.sh@561 -- # xtrace_disable
00:18:43.570      10:56:31	-- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address'
00:18:43.570      10:56:31	-- common/autotest_common.sh@10 -- # set +x
00:18:43.570      10:56:31	-- nvme/sw_hotplug.sh@58 -- # sort
00:18:43.570      10:56:31	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:43.570     10:56:31	-- nvme/sw_hotplug.sh@59 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:18:43.570     10:56:31	-- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- ))
00:18:43.570    10:56:31	-- common/autotest_common.sh@716 -- # time=52.88
00:18:43.570    10:56:31	-- common/autotest_common.sh@718 -- # echo 52.88
00:18:43.570   10:56:31	-- nvme/sw_hotplug.sh@16 -- # helper_time=52.88
00:18:43.570   10:56:31	-- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 52.88 1
00:18:43.570  remove_attach_helper took 52.88s to complete (handling 1 nvme drive(s))
00:18:43.570   10:56:31	-- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:18:43.570   10:56:31	-- nvme/sw_hotplug.sh@118 -- # killprocess 2181130
00:18:43.570   10:56:31	-- common/autotest_common.sh@936 -- # '[' -z 2181130 ']'
00:18:43.570   10:56:31	-- common/autotest_common.sh@940 -- # kill -0 2181130
00:18:43.570    10:56:31	-- common/autotest_common.sh@941 -- # uname
00:18:43.570   10:56:31	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:18:43.570    10:56:31	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2181130
00:18:43.570   10:56:31	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:18:43.570   10:56:31	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:18:43.570   10:56:31	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2181130'
00:18:43.570  killing process with pid 2181130
00:18:43.570   10:56:31	-- common/autotest_common.sh@955 -- # kill 2181130
00:18:43.570   10:56:31	-- common/autotest_common.sh@960 -- # wait 2181130
00:18:47.765  
00:18:47.765  real	2m40.652s
00:18:47.765  user	1m46.545s
00:18:47.765  sys	0m41.260s
00:18:47.765   10:56:36	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:47.765   10:56:36	-- common/autotest_common.sh@10 -- # set +x
00:18:47.765  ************************************
00:18:47.765  END TEST sw_hotplug
00:18:47.765  ************************************
00:18:47.765   10:56:36	-- spdk/autotest.sh@242 -- # [[ 0 -eq 1 ]]
00:18:47.765   10:56:36	-- spdk/autotest.sh@251 -- # '[' 1 -eq 1 ']'
00:18:47.765   10:56:36	-- spdk/autotest.sh@252 -- # run_test ioat /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh
00:18:47.765   10:56:36	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:18:47.765   10:56:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:47.766   10:56:36	-- common/autotest_common.sh@10 -- # set +x
00:18:47.766  ************************************
00:18:47.766  START TEST ioat
00:18:47.766  ************************************
00:18:47.766   10:56:36	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat/ioat.sh
00:18:47.766  * Looking for test storage...
00:18:47.766  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ioat
00:18:47.766    10:56:36	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:18:47.766     10:56:36	-- common/autotest_common.sh@1690 -- # lcov --version
00:18:47.766     10:56:36	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:18:47.766    10:56:36	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:18:47.766    10:56:36	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:18:47.766    10:56:36	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:18:47.766    10:56:36	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:18:47.766    10:56:36	-- scripts/common.sh@335 -- # IFS=.-:
00:18:47.766    10:56:36	-- scripts/common.sh@335 -- # read -ra ver1
00:18:47.766    10:56:36	-- scripts/common.sh@336 -- # IFS=.-:
00:18:47.766    10:56:36	-- scripts/common.sh@336 -- # read -ra ver2
00:18:47.766    10:56:36	-- scripts/common.sh@337 -- # local 'op=<'
00:18:47.766    10:56:36	-- scripts/common.sh@339 -- # ver1_l=2
00:18:47.766    10:56:36	-- scripts/common.sh@340 -- # ver2_l=1
00:18:47.766    10:56:36	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:18:47.766    10:56:36	-- scripts/common.sh@343 -- # case "$op" in
00:18:47.766    10:56:36	-- scripts/common.sh@344 -- # : 1
00:18:47.766    10:56:36	-- scripts/common.sh@363 -- # (( v = 0 ))
00:18:47.766    10:56:36	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:47.766     10:56:36	-- scripts/common.sh@364 -- # decimal 1
00:18:47.766     10:56:36	-- scripts/common.sh@352 -- # local d=1
00:18:47.766     10:56:36	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:47.766     10:56:36	-- scripts/common.sh@354 -- # echo 1
00:18:47.766    10:56:36	-- scripts/common.sh@364 -- # ver1[v]=1
00:18:47.766     10:56:36	-- scripts/common.sh@365 -- # decimal 2
00:18:47.766     10:56:36	-- scripts/common.sh@352 -- # local d=2
00:18:47.766     10:56:36	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:47.766     10:56:36	-- scripts/common.sh@354 -- # echo 2
00:18:47.766    10:56:36	-- scripts/common.sh@365 -- # ver2[v]=2
00:18:47.766    10:56:36	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:18:47.766    10:56:36	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:18:47.766    10:56:36	-- scripts/common.sh@367 -- # return 0
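The trace above is autotest's lcov version gate: `lt 1.15 2` splits both version strings on `.`, `-`, and `:` and compares them field by field. A minimal standalone sketch of the same idea (hypothetical helper, not the script's own code; assumes purely numeric fields):

    ver_lt() {
        local -a a b
        local i
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
            # Missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0).
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1  # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov is older than 2"   # 1 < 2, so this prints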
00:18:47.766    10:56:36	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:47.766    10:56:36	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:18:47.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:47.766  		--rc genhtml_branch_coverage=1
00:18:47.766  		--rc genhtml_function_coverage=1
00:18:47.766  		--rc genhtml_legend=1
00:18:47.766  		--rc geninfo_all_blocks=1
00:18:47.766  		--rc geninfo_unexecuted_blocks=1
00:18:47.766  		
00:18:47.766  		'
00:18:47.766    10:56:36	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:18:47.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:47.766  		--rc genhtml_branch_coverage=1
00:18:47.766  		--rc genhtml_function_coverage=1
00:18:47.766  		--rc genhtml_legend=1
00:18:47.766  		--rc geninfo_all_blocks=1
00:18:47.766  		--rc geninfo_unexecuted_blocks=1
00:18:47.766  		
00:18:47.766  		'
00:18:47.766    10:56:36	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:18:47.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:47.766  		--rc genhtml_branch_coverage=1
00:18:47.766  		--rc genhtml_function_coverage=1
00:18:47.766  		--rc genhtml_legend=1
00:18:47.766  		--rc geninfo_all_blocks=1
00:18:47.766  		--rc geninfo_unexecuted_blocks=1
00:18:47.766  		
00:18:47.766  		'
00:18:47.766    10:56:36	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:18:47.766  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:47.766  		--rc genhtml_branch_coverage=1
00:18:47.766  		--rc genhtml_function_coverage=1
00:18:47.766  		--rc genhtml_legend=1
00:18:47.766  		--rc geninfo_all_blocks=1
00:18:47.766  		--rc geninfo_unexecuted_blocks=1
00:18:47.766  		
00:18:47.766  		'
00:18:47.766   10:56:36	-- ioat/ioat.sh@10 -- # run_test ioat_perf /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1
00:18:47.766   10:56:36	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:18:47.766   10:56:36	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:47.766   10:56:36	-- common/autotest_common.sh@10 -- # set +x
00:18:47.766  ************************************
00:18:47.766  START TEST ioat_perf
00:18:47.766  ************************************
00:18:47.766   10:56:36	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/ioat_perf -t 1
00:18:47.766  EAL: No free 2048 kB hugepages reported on node 1
00:18:49.147  [2024-12-15 10:56:38.027891] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.027957] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.027970] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.027981] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.027992] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028003] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028013] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028024] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028034] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028044] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028055] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028066] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028076] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028087] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028097] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown!
00:18:49.147  [2024-12-15 10:56:38.028107] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown!
00:18:49.147   Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021
00:18:49.147   Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021
00:18:49.147  User configuration:
00:18:49.147  Number of channels:    1
00:18:49.147  Transfer size:  4096 bytes
00:18:49.147  Queue depth:    256
00:18:49.147  Run time:       1 seconds
00:18:49.147  Core mask:      0x1
00:18:49.147  Verify:         No
00:18:49.147  
00:18:49.147  Associating ioat_channel 0 with core 0
00:18:49.147  Starting thread on core 0
00:18:49.147  Channel_ID     Core     Transfers     Bandwidth     Failed
00:18:49.147  -----------------------------------------------------------
00:18:49.147           0         0      687488/s    2685 MiB/s          0
00:18:49.147  ===========================================================
00:18:49.147  Total:                    687488/s    2685 MiB/s          0
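A quick sanity check of the reported throughput: at 4096 bytes per transfer,

    687488 transfers/s * 4096 B = 2,815,950,848 B/s
    2,815,950,848 B/s / 1,048,576 B/MiB ~= 2685.5 MiB/s

which matches the 2685 MiB/s in the table (rounded to whole MiB/s).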
00:18:49.147  
00:18:49.147  real	0m1.632s
00:18:49.147  user	0m1.288s
00:18:49.147  sys	0m0.152s
00:18:49.147   10:56:38	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:49.147   10:56:38	-- common/autotest_common.sh@10 -- # set +x
00:18:49.147  ************************************
00:18:49.147  END TEST ioat_perf
00:18:49.147  ************************************
00:18:49.147   10:56:38	-- ioat/ioat.sh@12 -- # run_test ioat_verify /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1
00:18:49.147   10:56:38	-- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:18:49.147   10:56:38	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:49.147   10:56:38	-- common/autotest_common.sh@10 -- # set +x
00:18:49.147  ************************************
00:18:49.147  START TEST ioat_verify
00:18:49.147  ************************************
00:18:49.147   10:56:38	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/verify -t 1
00:18:49.147  EAL: No free 2048 kB hugepages reported on node 1
00:18:51.054  [2024-12-15 10:56:39.770474] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.0 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770568] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.1 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770582] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.2 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770593] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.3 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770603] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.4 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770614] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.5 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770631] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.6 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770641] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:00:04.7 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770652] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.0 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770662] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.1 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770673] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.2 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770683] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.3 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770693] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.4 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770704] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.5 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770714] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.6 is still attached at shutdown!
00:18:51.054  [2024-12-15 10:56:39.770725] pci.c: 350:pci_env_fini: *ERROR*: Device 0000:80:04.7 is still attached at shutdown!
00:18:51.054   Found matching device at 0000:00:04.0 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:00:04.1 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:00:04.2 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:00:04.3 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:00:04.4 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:00:04.5 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:00:04.6 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:00:04.7 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.0 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.1 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.2 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.3 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.4 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.5 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.6 vendor:0x8086 device:0x2021
00:18:51.054   Found matching device at 0000:80:04.7 vendor:0x8086 device:0x2021
00:18:51.054  User configuration:
00:18:51.054  Run time:       1 seconds
00:18:51.054  Core mask:      0x1
00:18:51.054  Queue depth:    32
00:18:51.054  lcore = 0, copy success = 542, copy failed = 0, fill success = 542, fill failed = 0
00:18:51.054  
00:18:51.054  real	0m1.698s
00:18:51.054  user	0m1.359s
00:18:51.054  sys	0m0.144s
00:18:51.054   10:56:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:51.054   10:56:39	-- common/autotest_common.sh@10 -- # set +x
00:18:51.054  ************************************
00:18:51.054  END TEST ioat_verify
00:18:51.054  ************************************
00:18:51.054  
00:18:51.054  real	0m3.587s
00:18:51.054  user	0m2.771s
00:18:51.054  sys	0m0.462s
00:18:51.054   10:56:39	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:18:51.054   10:56:39	-- common/autotest_common.sh@10 -- # set +x
00:18:51.054  ************************************
00:18:51.054  END TEST ioat
00:18:51.054  ************************************
00:18:51.054   10:56:39	-- spdk/autotest.sh@255 -- # timing_exit lib
00:18:51.054   10:56:39	-- common/autotest_common.sh@728 -- # xtrace_disable
00:18:51.054   10:56:39	-- common/autotest_common.sh@10 -- # set +x
00:18:51.054   10:56:39	-- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@325 -- # '[' 1 -eq 1 ']'
00:18:51.054   10:56:39	-- spdk/autotest.sh@326 -- # run_test ocf /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh
00:18:51.054   10:56:39	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:18:51.054   10:56:39	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:51.054   10:56:39	-- common/autotest_common.sh@10 -- # set +x
00:18:51.054  ************************************
00:18:51.054  START TEST ocf
00:18:51.054  ************************************
00:18:51.054   10:56:39	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/ocf.sh
00:18:51.054  * Looking for test storage...
00:18:51.054  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf
00:18:51.054    10:56:39	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:18:51.054     10:56:39	-- common/autotest_common.sh@1690 -- # lcov --version
00:18:51.054     10:56:39	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:18:51.314    10:56:40	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:18:51.314    10:56:40	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:18:51.314    10:56:40	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:18:51.314    10:56:40	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:18:51.314    10:56:40	-- scripts/common.sh@335 -- # IFS=.-:
00:18:51.314    10:56:40	-- scripts/common.sh@335 -- # read -ra ver1
00:18:51.314    10:56:40	-- scripts/common.sh@336 -- # IFS=.-:
00:18:51.314    10:56:40	-- scripts/common.sh@336 -- # read -ra ver2
00:18:51.314    10:56:40	-- scripts/common.sh@337 -- # local 'op=<'
00:18:51.314    10:56:40	-- scripts/common.sh@339 -- # ver1_l=2
00:18:51.314    10:56:40	-- scripts/common.sh@340 -- # ver2_l=1
00:18:51.314    10:56:40	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:18:51.314    10:56:40	-- scripts/common.sh@343 -- # case "$op" in
00:18:51.314    10:56:40	-- scripts/common.sh@344 -- # : 1
00:18:51.314    10:56:40	-- scripts/common.sh@363 -- # (( v = 0 ))
00:18:51.314    10:56:40	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:51.314     10:56:40	-- scripts/common.sh@364 -- # decimal 1
00:18:51.314     10:56:40	-- scripts/common.sh@352 -- # local d=1
00:18:51.314     10:56:40	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:51.314     10:56:40	-- scripts/common.sh@354 -- # echo 1
00:18:51.314    10:56:40	-- scripts/common.sh@364 -- # ver1[v]=1
00:18:51.314     10:56:40	-- scripts/common.sh@365 -- # decimal 2
00:18:51.314     10:56:40	-- scripts/common.sh@352 -- # local d=2
00:18:51.314     10:56:40	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:51.314     10:56:40	-- scripts/common.sh@354 -- # echo 2
00:18:51.314    10:56:40	-- scripts/common.sh@365 -- # ver2[v]=2
00:18:51.314    10:56:40	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:18:51.314    10:56:40	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:18:51.314    10:56:40	-- scripts/common.sh@367 -- # return 0
00:18:51.314    10:56:40	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:51.314    10:56:40	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:18:51.314  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.314  		--rc genhtml_branch_coverage=1
00:18:51.314  		--rc genhtml_function_coverage=1
00:18:51.314  		--rc genhtml_legend=1
00:18:51.314  		--rc geninfo_all_blocks=1
00:18:51.314  		--rc geninfo_unexecuted_blocks=1
00:18:51.314  		
00:18:51.314  		'
00:18:51.314    10:56:40	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:18:51.314  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.314  		--rc genhtml_branch_coverage=1
00:18:51.314  		--rc genhtml_function_coverage=1
00:18:51.314  		--rc genhtml_legend=1
00:18:51.314  		--rc geninfo_all_blocks=1
00:18:51.314  		--rc geninfo_unexecuted_blocks=1
00:18:51.314  		
00:18:51.314  		'
00:18:51.314    10:56:40	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:18:51.314  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.314  		--rc genhtml_branch_coverage=1
00:18:51.314  		--rc genhtml_function_coverage=1
00:18:51.314  		--rc genhtml_legend=1
00:18:51.314  		--rc geninfo_all_blocks=1
00:18:51.314  		--rc geninfo_unexecuted_blocks=1
00:18:51.314  		
00:18:51.314  		'
00:18:51.314    10:56:40	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:18:51.314  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.314  		--rc genhtml_branch_coverage=1
00:18:51.314  		--rc genhtml_function_coverage=1
00:18:51.314  		--rc genhtml_legend=1
00:18:51.314  		--rc geninfo_all_blocks=1
00:18:51.314  		--rc geninfo_unexecuted_blocks=1
00:18:51.314  		
00:18:51.314  		'
00:18:51.314   10:56:40	-- ocf/ocf.sh@11 -- # run_test ocf_fio_modes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh
00:18:51.314   10:56:40	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:18:51.314   10:56:40	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:51.314   10:56:40	-- common/autotest_common.sh@10 -- # set +x
00:18:51.314  ************************************
00:18:51.314  START TEST ocf_fio_modes
00:18:51.314  ************************************
00:18:51.314   10:56:40	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/fio-modes.sh
00:18:51.314     10:56:40	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:18:51.314      10:56:40	-- common/autotest_common.sh@1690 -- # lcov --version
00:18:51.314      10:56:40	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:18:51.314     10:56:40	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:18:51.314     10:56:40	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:18:51.314     10:56:40	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:18:51.314     10:56:40	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:18:51.314     10:56:40	-- scripts/common.sh@335 -- # IFS=.-:
00:18:51.315     10:56:40	-- scripts/common.sh@335 -- # read -ra ver1
00:18:51.315     10:56:40	-- scripts/common.sh@336 -- # IFS=.-:
00:18:51.315     10:56:40	-- scripts/common.sh@336 -- # read -ra ver2
00:18:51.315     10:56:40	-- scripts/common.sh@337 -- # local 'op=<'
00:18:51.315     10:56:40	-- scripts/common.sh@339 -- # ver1_l=2
00:18:51.315     10:56:40	-- scripts/common.sh@340 -- # ver2_l=1
00:18:51.315     10:56:40	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:18:51.315     10:56:40	-- scripts/common.sh@343 -- # case "$op" in
00:18:51.315     10:56:40	-- scripts/common.sh@344 -- # : 1
00:18:51.315     10:56:40	-- scripts/common.sh@363 -- # (( v = 0 ))
00:18:51.315     10:56:40	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:51.315      10:56:40	-- scripts/common.sh@364 -- # decimal 1
00:18:51.315      10:56:40	-- scripts/common.sh@352 -- # local d=1
00:18:51.315      10:56:40	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:51.315      10:56:40	-- scripts/common.sh@354 -- # echo 1
00:18:51.315     10:56:40	-- scripts/common.sh@364 -- # ver1[v]=1
00:18:51.315      10:56:40	-- scripts/common.sh@365 -- # decimal 2
00:18:51.315      10:56:40	-- scripts/common.sh@352 -- # local d=2
00:18:51.315      10:56:40	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:51.315      10:56:40	-- scripts/common.sh@354 -- # echo 2
00:18:51.315     10:56:40	-- scripts/common.sh@365 -- # ver2[v]=2
00:18:51.315     10:56:40	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:18:51.315     10:56:40	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:18:51.315     10:56:40	-- scripts/common.sh@367 -- # return 0
00:18:51.315     10:56:40	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:51.315     10:56:40	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:18:51.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.315  		--rc genhtml_branch_coverage=1
00:18:51.315  		--rc genhtml_function_coverage=1
00:18:51.315  		--rc genhtml_legend=1
00:18:51.315  		--rc geninfo_all_blocks=1
00:18:51.315  		--rc geninfo_unexecuted_blocks=1
00:18:51.315  		
00:18:51.315  		'
00:18:51.315     10:56:40	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:18:51.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.315  		--rc genhtml_branch_coverage=1
00:18:51.315  		--rc genhtml_function_coverage=1
00:18:51.315  		--rc genhtml_legend=1
00:18:51.315  		--rc geninfo_all_blocks=1
00:18:51.315  		--rc geninfo_unexecuted_blocks=1
00:18:51.315  		
00:18:51.315  		'
00:18:51.315     10:56:40	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:18:51.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.315  		--rc genhtml_branch_coverage=1
00:18:51.315  		--rc genhtml_function_coverage=1
00:18:51.315  		--rc genhtml_legend=1
00:18:51.315  		--rc geninfo_all_blocks=1
00:18:51.315  		--rc geninfo_unexecuted_blocks=1
00:18:51.315  		
00:18:51.315  		'
00:18:51.315     10:56:40	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:18:51.315  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:51.315  		--rc genhtml_branch_coverage=1
00:18:51.315  		--rc genhtml_function_coverage=1
00:18:51.315  		--rc genhtml_legend=1
00:18:51.315  		--rc geninfo_all_blocks=1
00:18:51.315  		--rc geninfo_unexecuted_blocks=1
00:18:51.315  		
00:18:51.315  		'
00:18:51.315    10:56:40	-- ocf/common.sh@9 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:18:51.315   10:56:40	-- integrity/fio-modes.sh@20 -- # clear_nvme
00:18:51.315   10:56:40	-- ocf/common.sh@12 -- # mapfile -t bdf
00:18:51.315    10:56:40	-- ocf/common.sh@12 -- # get_first_nvme_bdf
00:18:51.315    10:56:40	-- common/autotest_common.sh@1519 -- # bdfs=()
00:18:51.315    10:56:40	-- common/autotest_common.sh@1519 -- # local bdfs
00:18:51.315    10:56:40	-- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:18:51.315     10:56:40	-- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:18:51.315     10:56:40	-- common/autotest_common.sh@1508 -- # bdfs=()
00:18:51.315     10:56:40	-- common/autotest_common.sh@1508 -- # local bdfs
00:18:51.315     10:56:40	-- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:18:51.315      10:56:40	-- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh
00:18:51.315      10:56:40	-- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:18:51.575     10:56:40	-- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:18:51.575     10:56:40	-- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:5e:00.0
00:18:51.575    10:56:40	-- common/autotest_common.sh@1522 -- # echo 0000:5e:00.0
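The trace above resolves the first NVMe BDF by generating a bdev config with gen_nvme.sh and extracting every traddr with jq, then keeping the first array element; on this node that is the single controller 0000:5e:00.0. Run by hand, the same pipeline looks roughly like this (paths as in the trace; the array indexing stands in for the harness's printf/echo steps):

    bdfs=($(/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/gen_nvme.sh \
            | jq -r '.config[].params.traddr'))
    echo "${bdfs[0]}"    # -> 0000:5e:00.0 on this test node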
00:18:51.575   10:56:40	-- ocf/common.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh reset
00:18:54.866  Waiting for block devices as requested
00:18:54.866  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:18:54.866  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:18:54.866  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:18:54.866  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:18:54.866  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:18:54.866  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:18:54.866  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:18:55.126  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:18:55.126  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:18:55.126  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:18:55.385  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:18:55.385  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:18:55.385  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:18:55.645  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:18:55.645  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:18:55.645  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:18:55.904  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:18:55.904    10:56:44	-- ocf/common.sh@17 -- # get_nvme_name_from_bdf 0000:5e:00.0
00:18:55.904    10:56:44	-- common/autotest_common.sh@1476 -- # blkname=()
00:18:55.904     10:56:44	-- common/autotest_common.sh@1478 -- # lsblk -d --output NAME
00:18:55.904     10:56:44	-- common/autotest_common.sh@1478 -- # grep '^nvme'
00:18:55.904    10:56:44	-- common/autotest_common.sh@1478 -- # nvme_devs=nvme0n1
00:18:55.904    10:56:44	-- common/autotest_common.sh@1479 -- # '[' -z nvme0n1 ']'
00:18:55.904    10:56:44	-- common/autotest_common.sh@1482 -- # for dev in $nvme_devs
00:18:55.904     10:56:44	-- common/autotest_common.sh@1483 -- # readlink /sys/block/nvme0n1/device/device
00:18:55.904    10:56:44	-- common/autotest_common.sh@1483 -- # link_name=../../../0000:5e:00.0
00:18:55.904    10:56:44	-- common/autotest_common.sh@1484 -- # '[' -z ../../../0000:5e:00.0 ']'
00:18:55.904     10:56:44	-- common/autotest_common.sh@1487 -- # basename ../../../0000:5e:00.0
00:18:55.904    10:56:44	-- common/autotest_common.sh@1487 -- # bdf=0000:5e:00.0
00:18:55.904    10:56:44	-- common/autotest_common.sh@1488 -- # '[' 0000:5e:00.0 = 0000:5e:00.0 ']'
00:18:55.904    10:56:44	-- common/autotest_common.sh@1489 -- # blkname+=($dev)
00:18:55.904    10:56:44	-- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0n1
00:18:55.904   10:56:44	-- ocf/common.sh@17 -- # name=nvme0n1
00:18:55.904    10:56:44	-- ocf/common.sh@18 -- # lsblk /dev/nvme0n1 --output MOUNTPOINT -n
00:18:55.904    10:56:44	-- ocf/common.sh@18 -- # wc -w
00:18:55.904   10:56:44	-- ocf/common.sh@18 -- # mountpoints=0
00:18:55.904   10:56:44	-- ocf/common.sh@19 -- # '[' 0 '!=' 0 ']'
00:18:55.904   10:56:44	-- ocf/common.sh@22 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1000 oflag=direct
00:18:56.472  1000+0 records in
00:18:56.472  1000+0 records out
00:18:56.472  1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.502431 s, 2.1 GB/s
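The dd figures are self-consistent: 1000 MiB is 1048576000 bytes, and 1048576000 B / 0.502431 s ≈ 2.09 GB/s, which dd rounds to the 2.1 GB/s shown. A quick, purely illustrative check:

    echo 'scale=3; 1048576000 / 0.502431 / 10^9' | bc   # ~2.087 GB/s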
00:18:56.472   10:56:45	-- ocf/common.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:18:59.907  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:18:59.907  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:19:03.199  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:19:03.199   10:56:51	-- integrity/fio-modes.sh@22 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:03.199   10:56:51	-- integrity/fio-modes.sh@25 -- # xtrace_disable
00:19:03.199   10:56:51	-- common/autotest_common.sh@10 -- # set +x
00:19:03.199  {
00:19:03.199    "subsystems": [
00:19:03.199      {
00:19:03.199        "subsystem": "bdev",
00:19:03.199        "config": [
00:19:03.199          {
00:19:03.199            "method": "bdev_nvme_attach_controller",
00:19:03.199            "params": {
00:19:03.199              "trtype": "PCIe",
00:19:03.199              "name": "Nvme0",
00:19:03.199              "traddr": "0000:5e:00.0"
00:19:03.199            }
00:19:03.199          },
00:19:03.199          {
00:19:03.199            "method": "bdev_split_create",
00:19:03.199            "params": {
00:19:03.200              "base_bdev": "Nvme0n1",
00:19:03.200              "split_count": 8,
00:19:03.200              "split_size_mb": 101
00:19:03.200            }
00:19:03.200          },
00:19:03.200          {
00:19:03.200            "method": "bdev_ocf_create",
00:19:03.200            "params": {
00:19:03.200              "name": "PT_Nvme",
00:19:03.200              "mode": "pt",
00:19:03.200              "cache_bdev_name": "Nvme0n1p0",
00:19:03.200              "core_bdev_name": "Nvme0n1p1"
00:19:03.200            }
00:19:03.200          },
00:19:03.200          {
00:19:03.200            "method": "bdev_ocf_create",
00:19:03.200            "params": {
00:19:03.200              "name": "WT_Nvme",
00:19:03.200              "mode": "wt",
00:19:03.200              "cache_bdev_name": "Nvme0n1p2",
00:19:03.200              "core_bdev_name": "Nvme0n1p3"
00:19:03.200            }
00:19:03.200          },
00:19:03.200          {
00:19:03.200            "method": "bdev_ocf_create",
00:19:03.200            "params": {
00:19:03.200              "name": "WB_Nvme0",
00:19:03.200              "mode": "wb",
00:19:03.200              "cache_bdev_name": "Nvme0n1p4",
00:19:03.200              "core_bdev_name": "Nvme0n1p5"
00:19:03.200            }
00:19:03.200          },
00:19:03.200          {
00:19:03.200            "method": "bdev_ocf_create",
00:19:03.200            "params": {
00:19:03.200              "name": "WB_Nvme1",
00:19:03.200              "mode": "wb",
00:19:03.200              "cache_bdev_name": "Nvme0n1p6",
00:19:03.200              "core_bdev_name": "Nvme0n1p7"
00:19:03.200            }
00:19:03.200          },
00:19:03.200          {
00:19:03.200            "method": "bdev_wait_for_examine"
00:19:03.200          }
00:19:03.200        ]
00:19:03.200      }
00:19:03.200    ]
00:19:03.200  }
00:19:03.200   10:56:51	-- integrity/fio-modes.sh@100 -- # fio_verify --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
00:19:03.200   10:56:51	-- integrity/fio-modes.sh@12 -- # fio_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
00:19:03.200   10:56:51	-- common/autotest_common.sh@1345 -- # fio_plugin /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
00:19:03.200   10:56:51	-- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:19:03.200   10:56:51	-- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:03.200   10:56:51	-- common/autotest_common.sh@1328 -- # local sanitizers
00:19:03.200   10:56:51	-- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev
00:19:03.200   10:56:51	-- common/autotest_common.sh@1330 -- # shift
00:19:03.200   10:56:51	-- common/autotest_common.sh@1332 -- # local asan_lib=
00:19:03.200   10:56:51	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:19:03.200    10:56:51	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev
00:19:03.200    10:56:51	-- common/autotest_common.sh@1334 -- # grep libasan
00:19:03.200    10:56:51	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:19:03.200   10:56:51	-- common/autotest_common.sh@1334 -- # asan_lib=
00:19:03.200   10:56:51	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:19:03.200   10:56:51	-- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:19:03.200    10:56:51	-- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev
00:19:03.200    10:56:51	-- common/autotest_common.sh@1334 -- # grep libclang_rt.asan
00:19:03.200    10:56:51	-- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:19:03.200   10:56:51	-- common/autotest_common.sh@1334 -- # asan_lib=
00:19:03.200   10:56:51	-- common/autotest_common.sh@1335 -- # [[ -n '' ]]
00:19:03.200   10:56:51	-- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvme-phy-autotest/spdk/build/fio/spdk_bdev'
00:19:03.200   10:56:51	-- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/test.fio --aux-path=/tmp/ --ioengine=spdk_bdev --filename=PT_Nvme:WT_Nvme:WB_Nvme0:WB_Nvme1 --spdk_json_conf=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf --thread=1
00:19:03.458  randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:03.458  randrw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:03.458  write: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:03.458  rw: (g=0): rw=rw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:03.458  randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128
00:19:03.458  fio-3.35
00:19:03.458  Starting 5 threads
00:19:03.458  EAL: No free 2048 kB hugepages reported on node 1
00:19:18.345  
00:19:18.345  randwrite: (groupid=0, jobs=5): err= 0: pid=2198879: Sun Dec 15 10:57:06 2024
00:19:18.345    read: IOPS=22.6k, BW=88.1MiB/s (92.4MB/s)(882MiB/10006msec)
00:19:18.345      slat (usec): min=5, max=340, avg=35.23, stdev=25.09
00:19:18.345      clat (usec): min=74, max=25757, avg=7101.60, stdev=3693.85
00:19:18.345       lat (usec): min=114, max=25772, avg=7136.83, stdev=3691.28
00:19:18.345      clat percentiles (usec):
00:19:18.345       |  1.00th=[  429],  5.00th=[  938], 10.00th=[ 1844], 20.00th=[ 3785],
00:19:18.345       | 30.00th=[ 5014], 40.00th=[ 6128], 50.00th=[ 7177], 60.00th=[ 8160],
00:19:18.345       | 70.00th=[ 9110], 80.00th=[10159], 90.00th=[11469], 95.00th=[13042],
00:19:18.345       | 99.00th=[16581], 99.50th=[17433], 99.90th=[19792], 99.95th=[20579],
00:19:18.345       | 99.99th=[25035]
00:19:18.345     bw (  KiB/s): min= 4080, max=35776, per=26.25%, avg=23693.26, stdev=3002.12, samples=78
00:19:18.345     iops        : min= 1020, max= 8944, avg=5923.32, stdev=750.53, samples=78
00:19:18.345    write: IOPS=18.6k, BW=72.5MiB/s (76.0MB/s)(724MiB/9982msec); 0 zone resets
00:19:18.345      slat (usec): min=7, max=135, avg=32.55, stdev=20.71
00:19:18.345      clat (usec): min=48, max=79281, avg=8654.83, stdev=7565.95
00:19:18.345       lat (usec): min=60, max=79304, avg=8687.38, stdev=7569.79
00:19:18.345      clat percentiles (usec):
00:19:18.345       |  1.00th=[   95],  5.00th=[  131], 10.00th=[  190], 20.00th=[  619],
00:19:18.345       | 30.00th=[ 2835], 40.00th=[ 6128], 50.00th=[ 8291], 60.00th=[10159],
00:19:18.345       | 70.00th=[11863], 80.00th=[14222], 90.00th=[17433], 95.00th=[21365],
00:19:18.345       | 99.00th=[33424], 99.50th=[38536], 99.90th=[47973], 99.95th=[52167],
00:19:18.345       | 99.99th=[62653]
00:19:18.345     bw (  KiB/s): min=48536, max=102688, per=99.45%, avg=73844.23, stdev=3639.82, samples=94
00:19:18.345     iops        : min=12134, max=25672, avg=18461.06, stdev=909.95, samples=94
00:19:18.345    lat (usec)   : 50=0.01%, 100=0.66%, 250=6.37%, 500=2.39%, 750=1.96%
00:19:18.345    lat (usec)   : 1000=1.94%
00:19:18.346    lat (msec)   : 2=5.03%, 4=8.30%, 10=43.23%, 20=27.24%, 50=2.85%
00:19:18.346    lat (msec)   : 100=0.03%
00:19:18.346    cpu          : usr=99.53%, sys=0.01%, ctx=222, majf=0, minf=497
00:19:18.346    IO depths    : 1=5.7%, 2=4.9%, 4=5.2%, 8=7.5%, 16=9.9%, 32=18.4%, >=64=48.5%
00:19:18.346       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:18.346       complete  : 0=0.0%, 4=97.5%, 8=0.6%, 16=0.4%, 32=0.6%, 64=0.5%, >=64=0.3%
00:19:18.346       issued rwts: total=225768,185303,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:18.346       latency   : target=0, window=0, percentile=100.00%, depth=128
00:19:18.346  
00:19:18.346  Run status group 0 (all jobs):
00:19:18.346     READ: bw=88.1MiB/s (92.4MB/s), 88.1MiB/s-88.1MiB/s (92.4MB/s-92.4MB/s), io=882MiB (925MB), run=10006-10006msec
00:19:18.346    WRITE: bw=72.5MiB/s (76.0MB/s), 72.5MiB/s-72.5MiB/s (76.0MB/s-76.0MB/s), io=724MiB (759MB), run=9982-9982msec
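The summary squares with the per-job counters: 225768 reads over 10.006 s is ≈22.6k IOPS, and at the 4 KiB block size that is 225768 × 4096 / 10.006 ≈ 92.4 MB/s, exactly the READ bandwidth reported; the write side reconciles the same way (185303 × 4096 / 9.982 ≈ 76.0 MB/s). For instance:

    echo 'scale=1; 225768 * 4096 / 10.006 / 10^6' | bc   # ~92.4 MB/s read
    echo 'scale=1; 185303 * 4096 /  9.982 / 10^6' | bc   # ~76.0 MB/s write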
00:19:23.619   10:57:12	-- integrity/fio-modes.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:19:23.619   10:57:12	-- integrity/fio-modes.sh@103 -- # cleanup
00:19:23.619   10:57:12	-- integrity/fio-modes.sh@16 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/modes.conf
00:19:23.619  
00:19:23.619  real	0m32.094s
00:19:23.619  user	1m9.359s
00:19:23.619  sys	0m6.018s
00:19:23.619   10:57:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:23.619   10:57:12	-- common/autotest_common.sh@10 -- # set +x
00:19:23.619  ************************************
00:19:23.619  END TEST ocf_fio_modes
00:19:23.619  ************************************
00:19:23.619   10:57:12	-- ocf/ocf.sh@12 -- # run_test ocf_bdevperf_iotypes /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh
00:19:23.619   10:57:12	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:23.619   10:57:12	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:23.619   10:57:12	-- common/autotest_common.sh@10 -- # set +x
00:19:23.619  ************************************
00:19:23.619  START TEST ocf_bdevperf_iotypes
00:19:23.619  ************************************
00:19:23.619   10:57:12	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/bdevperf-iotypes.sh
00:19:23.619    10:57:12	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:23.619     10:57:12	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:23.619     10:57:12	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:23.619    10:57:12	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:23.619    10:57:12	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:23.619    10:57:12	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:23.619    10:57:12	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:23.619    10:57:12	-- scripts/common.sh@335 -- # IFS=.-:
00:19:23.619    10:57:12	-- scripts/common.sh@335 -- # read -ra ver1
00:19:23.619    10:57:12	-- scripts/common.sh@336 -- # IFS=.-:
00:19:23.619    10:57:12	-- scripts/common.sh@336 -- # read -ra ver2
00:19:23.619    10:57:12	-- scripts/common.sh@337 -- # local 'op=<'
00:19:23.619    10:57:12	-- scripts/common.sh@339 -- # ver1_l=2
00:19:23.619    10:57:12	-- scripts/common.sh@340 -- # ver2_l=1
00:19:23.619    10:57:12	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:23.619    10:57:12	-- scripts/common.sh@343 -- # case "$op" in
00:19:23.619    10:57:12	-- scripts/common.sh@344 -- # : 1
00:19:23.619    10:57:12	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:23.619    10:57:12	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:23.619     10:57:12	-- scripts/common.sh@364 -- # decimal 1
00:19:23.619     10:57:12	-- scripts/common.sh@352 -- # local d=1
00:19:23.619     10:57:12	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:23.619     10:57:12	-- scripts/common.sh@354 -- # echo 1
00:19:23.619    10:57:12	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:23.619     10:57:12	-- scripts/common.sh@365 -- # decimal 2
00:19:23.620     10:57:12	-- scripts/common.sh@352 -- # local d=2
00:19:23.620     10:57:12	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:23.620     10:57:12	-- scripts/common.sh@354 -- # echo 2
00:19:23.620    10:57:12	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:23.620    10:57:12	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:23.620    10:57:12	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:23.620    10:57:12	-- scripts/common.sh@367 -- # return 0
00:19:23.620    10:57:12	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:23.620    10:57:12	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:23.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:23.620  		--rc genhtml_branch_coverage=1
00:19:23.620  		--rc genhtml_function_coverage=1
00:19:23.620  		--rc genhtml_legend=1
00:19:23.620  		--rc geninfo_all_blocks=1
00:19:23.620  		--rc geninfo_unexecuted_blocks=1
00:19:23.620  		
00:19:23.620  		'
00:19:23.620    10:57:12	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:23.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:23.620  		--rc genhtml_branch_coverage=1
00:19:23.620  		--rc genhtml_function_coverage=1
00:19:23.620  		--rc genhtml_legend=1
00:19:23.620  		--rc geninfo_all_blocks=1
00:19:23.620  		--rc geninfo_unexecuted_blocks=1
00:19:23.620  		
00:19:23.620  		'
00:19:23.620    10:57:12	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:23.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:23.620  		--rc genhtml_branch_coverage=1
00:19:23.620  		--rc genhtml_function_coverage=1
00:19:23.620  		--rc genhtml_legend=1
00:19:23.620  		--rc geninfo_all_blocks=1
00:19:23.620  		--rc geninfo_unexecuted_blocks=1
00:19:23.620  		
00:19:23.620  		'
00:19:23.620    10:57:12	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:23.620  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:23.620  		--rc genhtml_branch_coverage=1
00:19:23.620  		--rc genhtml_function_coverage=1
00:19:23.620  		--rc genhtml_legend=1
00:19:23.620  		--rc geninfo_all_blocks=1
00:19:23.620  		--rc geninfo_unexecuted_blocks=1
00:19:23.620  		
00:19:23.620  		'
00:19:23.620   10:57:12	-- integrity/bdevperf-iotypes.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf
00:19:23.620   10:57:12	-- integrity/bdevperf-iotypes.sh@12 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf
00:19:23.620   10:57:12	-- integrity/bdevperf-iotypes.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w flush
00:19:23.620    10:57:12	-- integrity/bdevperf-iotypes.sh@13 -- # gen_malloc_ocf_json
00:19:23.620    10:57:12	-- integrity/mallocs.conf@2 -- # local size=300
00:19:23.620    10:57:12	-- integrity/mallocs.conf@3 -- # local block_size=512
00:19:23.620    10:57:12	-- integrity/mallocs.conf@4 -- # local config
00:19:23.620    10:57:12	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:23.620  {
00:19:23.620    "method": "bdev_malloc_create",
00:19:23.620    "params": {
00:19:23.620      "name": "Malloc$malloc",
00:19:23.620      "num_blocks": $(( (size << 20) / block_size )),
00:19:23.620      "block_size": 512
00:19:23.620    }
00:19:23.620  }
00:19:23.620  JSON
00:19:23.620  )")
00:19:23.620     10:57:12	-- integrity/mallocs.conf@21 -- # cat
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:23.620  {
00:19:23.620    "method": "bdev_malloc_create",
00:19:23.620    "params": {
00:19:23.620      "name": "Malloc$malloc",
00:19:23.620      "num_blocks": $(( (size << 20) / block_size )),
00:19:23.620      "block_size": 512
00:19:23.620    }
00:19:23.620  }
00:19:23.620  JSON
00:19:23.620  )")
00:19:23.620     10:57:12	-- integrity/mallocs.conf@21 -- # cat
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:23.620  {
00:19:23.620    "method": "bdev_malloc_create",
00:19:23.620    "params": {
00:19:23.620      "name": "Malloc$malloc",
00:19:23.620      "num_blocks": $(( (size << 20) / block_size )),
00:19:23.620      "block_size": 512
00:19:23.620    }
00:19:23.620  }
00:19:23.620  JSON
00:19:23.620  )")
00:19:23.620     10:57:12	-- integrity/mallocs.conf@21 -- # cat
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:23.620    10:57:12	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:19:23.620    10:57:12	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:19:23.620    10:57:12	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:19:23.620    10:57:12	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:19:23.620    10:57:12	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:19:23.620    10:57:12	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:19:23.620    10:57:12	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:19:23.620    10:57:12	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:19:23.620    10:57:12	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:23.620    10:57:12	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:23.620  {
00:19:23.620    "method": "bdev_ocf_create",
00:19:23.620    "params": {
00:19:23.620      "name": "MalCache$ocf",
00:19:23.620      "mode": "${ocf_mode[ocf]}",
00:19:23.620      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:23.620      "core_bdev_name": "${ocf_core[ocf]}"
00:19:23.620    }
00:19:23.620  }
00:19:23.620  JSON
00:19:23.620  )")
00:19:23.620     10:57:12	-- integrity/mallocs.conf@44 -- # cat
00:19:23.620    10:57:12	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:23.620    10:57:12	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:23.620  {
00:19:23.620    "method": "bdev_ocf_create",
00:19:23.620    "params": {
00:19:23.620      "name": "MalCache$ocf",
00:19:23.620      "mode": "${ocf_mode[ocf]}",
00:19:23.620      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:23.620      "core_bdev_name": "${ocf_core[ocf]}"
00:19:23.620    }
00:19:23.620  }
00:19:23.620  JSON
00:19:23.620  )")
00:19:23.620     10:57:12	-- integrity/mallocs.conf@44 -- # cat
00:19:23.620    10:57:12	-- integrity/mallocs.conf@47 -- # jq .
00:19:23.620     10:57:12	-- integrity/mallocs.conf@47 -- # IFS=,
00:19:23.620     10:57:12	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:19:23.620    "method": "bdev_malloc_create",
00:19:23.620    "params": {
00:19:23.620      "name": "Malloc0",
00:19:23.620      "num_blocks": 614400,
00:19:23.620      "block_size": 512
00:19:23.620    }
00:19:23.620  },{
00:19:23.620    "method": "bdev_malloc_create",
00:19:23.620    "params": {
00:19:23.620      "name": "Malloc1",
00:19:23.620      "num_blocks": 614400,
00:19:23.620      "block_size": 512
00:19:23.620    }
00:19:23.620  },{
00:19:23.620    "method": "bdev_malloc_create",
00:19:23.620    "params": {
00:19:23.620      "name": "Malloc2",
00:19:23.620      "num_blocks": 614400,
00:19:23.620      "block_size": 512
00:19:23.620    }
00:19:23.620  },{
00:19:23.620    "method": "bdev_ocf_create",
00:19:23.620    "params": {
00:19:23.620      "name": "MalCache1",
00:19:23.620      "mode": "wt",
00:19:23.620      "cache_bdev_name": "Malloc0",
00:19:23.620      "core_bdev_name": "Malloc1"
00:19:23.620    }
00:19:23.620  },{
00:19:23.620    "method": "bdev_ocf_create",
00:19:23.620    "params": {
00:19:23.620      "name": "MalCache2",
00:19:23.620      "mode": "pt",
00:19:23.620      "cache_bdev_name": "Malloc0",
00:19:23.620      "core_bdev_name": "Malloc2"
00:19:23.620    }
00:19:23.620  }'
00:19:23.620  [2024-12-15 10:57:12.476977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:23.620  [2024-12-15 10:57:12.477061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2201558 ]
00:19:23.620  EAL: No free 2048 kB hugepages reported on node 1
00:19:23.620  [2024-12-15 10:57:12.571924] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:23.880  [2024-12-15 10:57:12.666547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:23.880  [2024-12-15 10:57:12.871023] 'OCF_Core' volume operations registered
00:19:23.880  [2024-12-15 10:57:12.874492] 'OCF_Cache' volume operations registered
00:19:23.880  [2024-12-15 10:57:12.878428] 'OCF Composite' volume operations registered
00:19:23.880  [2024-12-15 10:57:12.881917] 'SPDK_block_device' volume operations registered
00:19:24.140  [2024-12-15 10:57:13.127996] Inserting cache MalCache1
00:19:24.140  [2024-12-15 10:57:13.128473] MalCache1: Metadata initialized
00:19:24.140  [2024-12-15 10:57:13.128924] MalCache1: Successfully added
00:19:24.140  [2024-12-15 10:57:13.128940] MalCache1: Cache mode : wt
00:19:24.140  [2024-12-15 10:57:13.139886] MalCache1: Super block config offset : 0 kiB
00:19:24.140  [2024-12-15 10:57:13.139908] MalCache1: Super block config size : 2200 B
00:19:24.140  [2024-12-15 10:57:13.139915] MalCache1: Super block runtime offset : 128 kiB
00:19:24.140  [2024-12-15 10:57:13.139921] MalCache1: Super block runtime size : 4 B
00:19:24.140  [2024-12-15 10:57:13.139928] MalCache1: Reserved offset : 256 kiB
00:19:24.140  [2024-12-15 10:57:13.139934] MalCache1: Reserved size : 128 kiB
00:19:24.140  [2024-12-15 10:57:13.139940] MalCache1: Part config offset : 384 kiB
00:19:24.140  [2024-12-15 10:57:13.139947] MalCache1: Part config size : 48 kiB
00:19:24.140  [2024-12-15 10:57:13.139953] MalCache1: Part runtime offset : 640 kiB
00:19:24.140  [2024-12-15 10:57:13.139959] MalCache1: Part runtime size : 72 kiB
00:19:24.140  [2024-12-15 10:57:13.139966] MalCache1: Core config offset : 768 kiB
00:19:24.140  [2024-12-15 10:57:13.139972] MalCache1: Core config size : 512 kiB
00:19:24.140  [2024-12-15 10:57:13.139978] MalCache1: Core runtime offset : 1792 kiB
00:19:24.140  [2024-12-15 10:57:13.139985] MalCache1: Core runtime size : 1172 kiB
00:19:24.140  [2024-12-15 10:57:13.139991] MalCache1: Core UUID offset : 3072 kiB
00:19:24.140  [2024-12-15 10:57:13.139997] MalCache1: Core UUID size : 16384 kiB
00:19:24.140  [2024-12-15 10:57:13.140004] MalCache1: Cleaning offset : 35840 kiB
00:19:24.140  [2024-12-15 10:57:13.140010] MalCache1: Cleaning size : 788 kiB
00:19:24.140  [2024-12-15 10:57:13.140016] MalCache1: LRU list offset : 36736 kiB
00:19:24.140  [2024-12-15 10:57:13.140023] MalCache1: LRU list size : 592 kiB
00:19:24.140  [2024-12-15 10:57:13.140029] MalCache1: Collision offset : 37376 kiB
00:19:24.140  [2024-12-15 10:57:13.140035] MalCache1: Collision size : 788 kiB
00:19:24.140  [2024-12-15 10:57:13.140041] MalCache1: List info offset : 38272 kiB
00:19:24.140  [2024-12-15 10:57:13.140048] MalCache1: List info size : 592 kiB
00:19:24.140  [2024-12-15 10:57:13.140054] MalCache1: Hash offset : 38912 kiB
00:19:24.140  [2024-12-15 10:57:13.140060] MalCache1: Hash size : 68 kiB
00:19:24.140  [2024-12-15 10:57:13.140067] MalCache1: Cache line size: 4 kiB
00:19:24.140  [2024-12-15 10:57:13.140076] MalCache1: Metadata capacity: 20 MiB
00:19:24.140  [2024-12-15 10:57:13.150529] MalCache1: Policy 'always' initialized successfully
00:19:24.399  [2024-12-15 10:57:13.363340] MalCache1: Done saving cache state!
00:19:24.399  [2024-12-15 10:57:13.395357] MalCache1: Cache attached
00:19:24.399  [2024-12-15 10:57:13.395451] MalCache1: Successfully attached
00:19:24.399  [2024-12-15 10:57:13.395738] MalCache1: Inserting core Malloc1
00:19:24.399  [2024-12-15 10:57:13.395764] MalCache1.Malloc1: Sequential cutoff init
00:19:24.659  [2024-12-15 10:57:13.427777] MalCache1.Malloc1: Successfully added
00:19:24.659  [2024-12-15 10:57:13.433536] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:19:24.659  [2024-12-15 10:57:13.433780] MalCache1: Inserting core Malloc2
00:19:24.659  [2024-12-15 10:57:13.433804] MalCache1.Malloc2: Sequential cutoff init
00:19:24.659  [2024-12-15 10:57:13.466023] MalCache1.Malloc2: Successfully added
00:19:24.659  Running I/O for 4 seconds...
00:19:28.860  
00:19:28.860                                                                                                  Latency(us)
00:19:28.860  
[2024-12-15T09:57:17.876Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:28.860  
[2024-12-15T09:57:17.876Z]  Job: MalCache1 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096)
00:19:28.860  	 MalCache1           :       4.00   29933.86     116.93       0.00     0.00    4268.67     730.16    4843.97
00:19:28.860  
[2024-12-15T09:57:17.876Z]  Job: MalCache2 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096)
00:19:28.860  	 MalCache2           :       4.01   29928.23     116.91       0.00     0.00    4267.96     698.10    4616.01
00:19:28.860  
[2024-12-15T09:57:17.876Z]  ===================================================================================================================
00:19:28.860  
[2024-12-15T09:57:17.876Z]  Total                       :              59862.09     233.84       0.00     0.00    4268.31     698.10    4843.97
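For this two-job group the Total row is simply the sum of the per-cache rows: 29933.86 + 29928.23 = 59862.09 IOPS, and the MiB/s column adds up the same way (116.93 + 116.91 = 233.84). E.g.:

    echo '29933.86 + 29928.23' | bc   # -> 59862.09, the Total IOPS row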
00:19:28.860  [2024-12-15 10:57:17.504821] MalCache1: Flushing cache
00:19:28.860  [2024-12-15 10:57:17.504851] MalCache1: Flushing cache completed
00:19:28.860  [2024-12-15 10:57:17.505681] MalCache1: Stopping cache
00:19:28.860  [2024-12-15 10:57:17.693148] MalCache1: Done saving cache state!
00:19:28.860  [2024-12-15 10:57:17.709043] Cache MalCache1 successfully stopped
00:19:29.435   10:57:18	-- integrity/bdevperf-iotypes.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w unmap
00:19:29.435    10:57:18	-- integrity/bdevperf-iotypes.sh@14 -- # gen_malloc_ocf_json
00:19:29.435    10:57:18	-- integrity/mallocs.conf@2 -- # local size=300
00:19:29.435    10:57:18	-- integrity/mallocs.conf@3 -- # local block_size=512
00:19:29.435    10:57:18	-- integrity/mallocs.conf@4 -- # local config
00:19:29.435    10:57:18	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:29.435  {
00:19:29.435    "method": "bdev_malloc_create",
00:19:29.435    "params": {
00:19:29.435      "name": "Malloc$malloc",
00:19:29.435      "num_blocks": $(( (size << 20) / block_size )),
00:19:29.435      "block_size": 512
00:19:29.435    }
00:19:29.435  }
00:19:29.435  JSON
00:19:29.435  )")
00:19:29.435     10:57:18	-- integrity/mallocs.conf@21 -- # cat
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:29.435  {
00:19:29.435    "method": "bdev_malloc_create",
00:19:29.435    "params": {
00:19:29.435      "name": "Malloc$malloc",
00:19:29.435      "num_blocks": $(( (size << 20) / block_size )),
00:19:29.435      "block_size": 512
00:19:29.435    }
00:19:29.435  }
00:19:29.435  JSON
00:19:29.435  )")
00:19:29.435     10:57:18	-- integrity/mallocs.conf@21 -- # cat
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:29.435  {
00:19:29.435    "method": "bdev_malloc_create",
00:19:29.435    "params": {
00:19:29.435      "name": "Malloc$malloc",
00:19:29.435      "num_blocks": $(( (size << 20) / block_size )),
00:19:29.435      "block_size": 512
00:19:29.435    }
00:19:29.435  }
00:19:29.435  JSON
00:19:29.435  )")
00:19:29.435     10:57:18	-- integrity/mallocs.conf@21 -- # cat
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:29.435    10:57:18	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:19:29.435    10:57:18	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:19:29.435    10:57:18	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:19:29.435    10:57:18	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:19:29.435    10:57:18	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:19:29.435    10:57:18	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:19:29.435    10:57:18	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:19:29.435    10:57:18	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:19:29.435    10:57:18	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:29.435    10:57:18	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:29.435  {
00:19:29.435    "method": "bdev_ocf_create",
00:19:29.435    "params": {
00:19:29.435      "name": "MalCache$ocf",
00:19:29.435      "mode": "${ocf_mode[ocf]}",
00:19:29.435      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:29.435      "core_bdev_name": "${ocf_core[ocf]}"
00:19:29.435    }
00:19:29.435  }
00:19:29.435  JSON
00:19:29.435  )")
00:19:29.435     10:57:18	-- integrity/mallocs.conf@44 -- # cat
00:19:29.435    10:57:18	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:29.435    10:57:18	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:29.435  {
00:19:29.435    "method": "bdev_ocf_create",
00:19:29.435    "params": {
00:19:29.435      "name": "MalCache$ocf",
00:19:29.435      "mode": "${ocf_mode[ocf]}",
00:19:29.435      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:29.435      "core_bdev_name": "${ocf_core[ocf]}"
00:19:29.435    }
00:19:29.435  }
00:19:29.435  JSON
00:19:29.435  )")
00:19:29.435     10:57:18	-- integrity/mallocs.conf@44 -- # cat
00:19:29.435    10:57:18	-- integrity/mallocs.conf@47 -- # jq .
00:19:29.435     10:57:18	-- integrity/mallocs.conf@47 -- # IFS=,
00:19:29.435     10:57:18	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:19:29.435    "method": "bdev_malloc_create",
00:19:29.435    "params": {
00:19:29.435      "name": "Malloc0",
00:19:29.435      "num_blocks": 614400,
00:19:29.435      "block_size": 512
00:19:29.435    }
00:19:29.435  },{
00:19:29.435    "method": "bdev_malloc_create",
00:19:29.435    "params": {
00:19:29.435      "name": "Malloc1",
00:19:29.435      "num_blocks": 614400,
00:19:29.435      "block_size": 512
00:19:29.435    }
00:19:29.435  },{
00:19:29.435    "method": "bdev_malloc_create",
00:19:29.435    "params": {
00:19:29.435      "name": "Malloc2",
00:19:29.435      "num_blocks": 614400,
00:19:29.435      "block_size": 512
00:19:29.435    }
00:19:29.435  },{
00:19:29.435    "method": "bdev_ocf_create",
00:19:29.435    "params": {
00:19:29.435      "name": "MalCache1",
00:19:29.435      "mode": "wt",
00:19:29.435      "cache_bdev_name": "Malloc0",
00:19:29.435      "core_bdev_name": "Malloc1"
00:19:29.435    }
00:19:29.435  },{
00:19:29.435    "method": "bdev_ocf_create",
00:19:29.435    "params": {
00:19:29.435      "name": "MalCache2",
00:19:29.435      "mode": "pt",
00:19:29.435      "cache_bdev_name": "Malloc0",
00:19:29.435      "core_bdev_name": "Malloc2"
00:19:29.435    }
00:19:29.435  }'
00:19:29.435  [2024-12-15 10:57:18.407975] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:29.435  [2024-12-15 10:57:18.408055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202288 ]
00:19:29.695  EAL: No free 2048 kB hugepages reported on node 1
00:19:29.695  [2024-12-15 10:57:18.515118] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:29.695  [2024-12-15 10:57:18.617989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:29.955  [2024-12-15 10:57:18.827127] 'OCF_Core' volume operations registered
00:19:29.955  [2024-12-15 10:57:18.830635] 'OCF_Cache' volume operations registered
00:19:29.955  [2024-12-15 10:57:18.834601] 'OCF Composite' volume operations registered
00:19:29.955  [2024-12-15 10:57:18.838126] 'SPDK_block_device' volume operations registered
00:19:30.214  [2024-12-15 10:57:19.070817] Inserting cache MalCache1
00:19:30.214  [2024-12-15 10:57:19.071244] MalCache1: Metadata initialized
00:19:30.214  [2024-12-15 10:57:19.071695] MalCache1: Successfully added
00:19:30.214  [2024-12-15 10:57:19.071711] MalCache1: Cache mode : wt
00:19:30.214  [2024-12-15 10:57:19.081667] MalCache1: Super block config offset : 0 kiB
00:19:30.214  [2024-12-15 10:57:19.081688] MalCache1: Super block config size : 2200 B
00:19:30.214  [2024-12-15 10:57:19.081695] MalCache1: Super block runtime offset : 128 kiB
00:19:30.214  [2024-12-15 10:57:19.081701] MalCache1: Super block runtime size : 4 B
00:19:30.214  [2024-12-15 10:57:19.081708] MalCache1: Reserved offset : 256 kiB
00:19:30.214  [2024-12-15 10:57:19.081714] MalCache1: Reserved size : 128 kiB
00:19:30.214  [2024-12-15 10:57:19.081721] MalCache1: Part config offset : 384 kiB
00:19:30.214  [2024-12-15 10:57:19.081727] MalCache1: Part config size : 48 kiB
00:19:30.214  [2024-12-15 10:57:19.081733] MalCache1: Part runtime offset : 640 kiB
00:19:30.214  [2024-12-15 10:57:19.081740] MalCache1: Part runtime size : 72 kiB
00:19:30.214  [2024-12-15 10:57:19.081746] MalCache1: Core config offset : 768 kiB
00:19:30.214  [2024-12-15 10:57:19.081752] MalCache1: Core config size : 512 kiB
00:19:30.214  [2024-12-15 10:57:19.081759] MalCache1: Core runtime offset : 1792 kiB
00:19:30.214  [2024-12-15 10:57:19.081765] MalCache1: Core runtime size : 1172 kiB
00:19:30.214  [2024-12-15 10:57:19.081771] MalCache1: Core UUID offset : 3072 kiB
00:19:30.214  [2024-12-15 10:57:19.081778] MalCache1: Core UUID size : 16384 kiB
00:19:30.214  [2024-12-15 10:57:19.081784] MalCache1: Cleaning offset : 35840 kiB
00:19:30.214  [2024-12-15 10:57:19.081790] MalCache1: Cleaning size : 788 kiB
00:19:30.214  [2024-12-15 10:57:19.081797] MalCache1: LRU list offset : 36736 kiB
00:19:30.214  [2024-12-15 10:57:19.081803] MalCache1: LRU list size : 592 kiB
00:19:30.214  [2024-12-15 10:57:19.081809] MalCache1: Collision offset : 37376 kiB
00:19:30.214  [2024-12-15 10:57:19.081816] MalCache1: Collision size : 788 kiB
00:19:30.214  [2024-12-15 10:57:19.081822] MalCache1: List info offset : 38272 kiB
00:19:30.214  [2024-12-15 10:57:19.081828] MalCache1: List info size : 592 kiB
00:19:30.214  [2024-12-15 10:57:19.081835] MalCache1: Hash offset : 38912 kiB
00:19:30.214  [2024-12-15 10:57:19.081841] MalCache1: Hash size : 68 kiB
00:19:30.214  [2024-12-15 10:57:19.081848] MalCache1: Cache line size: 4 kiB
00:19:30.214  [2024-12-15 10:57:19.081856] MalCache1: Metadata capacity: 20 MiB
00:19:30.214  [2024-12-15 10:57:19.091454] MalCache1: Policy 'always' initialized successfully
00:19:30.473  [2024-12-15 10:57:19.302654] MalCache1: Done saving cache state!
00:19:30.473  [2024-12-15 10:57:19.333905] MalCache1: Cache attached
00:19:30.473  [2024-12-15 10:57:19.334001] MalCache1: Successfully attached
00:19:30.473  [2024-12-15 10:57:19.334292] MalCache1: Inserting core Malloc1
00:19:30.473  [2024-12-15 10:57:19.334322] MalCache1.Malloc1: Sequential cutoff init
00:19:30.473  [2024-12-15 10:57:19.365120] MalCache1.Malloc1: Successfully added
00:19:30.473  [2024-12-15 10:57:19.370944] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:19:30.473  [2024-12-15 10:57:19.371187] MalCache1: Inserting core Malloc2
00:19:30.473  [2024-12-15 10:57:19.371209] MalCache1.Malloc2: Sequential cutoff init
00:19:30.473  [2024-12-15 10:57:19.402547] MalCache1.Malloc2: Successfully added
00:19:30.473  Running I/O for 4 seconds...
00:19:34.669  
00:19:34.669                                                                                                  Latency(us)
00:19:34.669  
[2024-12-15T09:57:23.685Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:34.669  
[2024-12-15T09:57:23.685Z]  Job: MalCache1 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096)
00:19:34.669  	 MalCache1           :       4.00   23701.64      92.58       0.00     0.00    5403.65    1189.62 4026531.84
00:19:34.669  
[2024-12-15T09:57:23.685Z]  Job: MalCache2 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096)
00:19:34.669  	 MalCache2           :       4.01   23698.58      92.57       0.00     0.00    5402.05    1025.78 4026531.84
00:19:34.669  
[2024-12-15T09:57:23.685Z]  ===================================================================================================================
00:19:34.669  
[2024-12-15T09:57:23.685Z]  Total                       :              47400.22     185.16       0.00     0.00    5402.85    1025.78 4026531.84
00:19:34.669  [2024-12-15 10:57:23.441178] MalCache1: Flushing cache
00:19:34.669  [2024-12-15 10:57:23.441220] MalCache1: Flushing cache completed
00:19:34.669  [2024-12-15 10:57:23.442112] MalCache1: Stopping cache
00:19:34.669  [2024-12-15 10:57:23.630358] MalCache1: Done saving cache state!
00:19:34.669  [2024-12-15 10:57:23.647621] Cache MalCache1 successfully stopped
00:19:35.608   10:57:24	-- integrity/bdevperf-iotypes.sh@15 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -t 4 -w write
00:19:35.608    10:57:24	-- integrity/bdevperf-iotypes.sh@15 -- # gen_malloc_ocf_json
00:19:35.608    10:57:24	-- integrity/mallocs.conf@2 -- # local size=300
00:19:35.608    10:57:24	-- integrity/mallocs.conf@3 -- # local block_size=512
00:19:35.608    10:57:24	-- integrity/mallocs.conf@4 -- # local config
00:19:35.608    10:57:24	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:35.608  {
00:19:35.608    "method": "bdev_malloc_create",
00:19:35.608    "params": {
00:19:35.608      "name": "Malloc$malloc",
00:19:35.608      "num_blocks": $(( (size << 20) / block_size )),
00:19:35.608      "block_size": 512
00:19:35.608    }
00:19:35.608  }
00:19:35.608  JSON
00:19:35.608  )")
00:19:35.608     10:57:24	-- integrity/mallocs.conf@21 -- # cat
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:35.608  {
00:19:35.608    "method": "bdev_malloc_create",
00:19:35.608    "params": {
00:19:35.608      "name": "Malloc$malloc",
00:19:35.608      "num_blocks": $(( (size << 20) / block_size )),
00:19:35.608      "block_size": 512
00:19:35.608    }
00:19:35.608  }
00:19:35.608  JSON
00:19:35.608  )")
00:19:35.608     10:57:24	-- integrity/mallocs.conf@21 -- # cat
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:35.608  {
00:19:35.608    "method": "bdev_malloc_create",
00:19:35.608    "params": {
00:19:35.608      "name": "Malloc$malloc",
00:19:35.608      "num_blocks": $(( (size << 20) / block_size )),
00:19:35.608      "block_size": 512
00:19:35.608    }
00:19:35.608  }
00:19:35.608  JSON
00:19:35.608  )")
00:19:35.608     10:57:24	-- integrity/mallocs.conf@21 -- # cat
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:35.608    10:57:24	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:19:35.608    10:57:24	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:19:35.608    10:57:24	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:19:35.608    10:57:24	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:19:35.608    10:57:24	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:19:35.608    10:57:24	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:19:35.608    10:57:24	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:19:35.608    10:57:24	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:19:35.608    10:57:24	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:35.608    10:57:24	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:35.608  {
00:19:35.608    "method": "bdev_ocf_create",
00:19:35.608    "params": {
00:19:35.608      "name": "MalCache$ocf",
00:19:35.608      "mode": "${ocf_mode[ocf]}",
00:19:35.608      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:35.608      "core_bdev_name": "${ocf_core[ocf]}"
00:19:35.608    }
00:19:35.608  }
00:19:35.608  JSON
00:19:35.608  )")
00:19:35.608     10:57:24	-- integrity/mallocs.conf@44 -- # cat
00:19:35.608    10:57:24	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:35.608    10:57:24	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:35.608  {
00:19:35.608    "method": "bdev_ocf_create",
00:19:35.608    "params": {
00:19:35.608      "name": "MalCache$ocf",
00:19:35.608      "mode": "${ocf_mode[ocf]}",
00:19:35.608      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:35.608      "core_bdev_name": "${ocf_core[ocf]}"
00:19:35.608    }
00:19:35.608  }
00:19:35.608  JSON
00:19:35.608  )")
00:19:35.608     10:57:24	-- integrity/mallocs.conf@44 -- # cat
00:19:35.608    10:57:24	-- integrity/mallocs.conf@47 -- # jq .
00:19:35.608     10:57:24	-- integrity/mallocs.conf@47 -- # IFS=,
00:19:35.608     10:57:24	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:19:35.608    "method": "bdev_malloc_create",
00:19:35.608    "params": {
00:19:35.608      "name": "Malloc0",
00:19:35.608      "num_blocks": 614400,
00:19:35.608      "block_size": 512
00:19:35.608    }
00:19:35.608  },{
00:19:35.608    "method": "bdev_malloc_create",
00:19:35.608    "params": {
00:19:35.608      "name": "Malloc1",
00:19:35.608      "num_blocks": 614400,
00:19:35.608      "block_size": 512
00:19:35.608    }
00:19:35.608  },{
00:19:35.608    "method": "bdev_malloc_create",
00:19:35.608    "params": {
00:19:35.608      "name": "Malloc2",
00:19:35.608      "num_blocks": 614400,
00:19:35.608      "block_size": 512
00:19:35.608    }
00:19:35.608  },{
00:19:35.608    "method": "bdev_ocf_create",
00:19:35.608    "params": {
00:19:35.608      "name": "MalCache1",
00:19:35.608      "mode": "wt",
00:19:35.608      "cache_bdev_name": "Malloc0",
00:19:35.608      "core_bdev_name": "Malloc1"
00:19:35.608    }
00:19:35.608  },{
00:19:35.608    "method": "bdev_ocf_create",
00:19:35.608    "params": {
00:19:35.608      "name": "MalCache2",
00:19:35.608      "mode": "pt",
00:19:35.608      "cache_bdev_name": "Malloc0",
00:19:35.609      "core_bdev_name": "Malloc2"
00:19:35.609    }
00:19:35.609  }'
00:19:35.609  [2024-12-15 10:57:24.381249] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:35.609  [2024-12-15 10:57:24.381313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203181 ]
00:19:35.609  EAL: No free 2048 kB hugepages reported on node 1
00:19:35.609  [2024-12-15 10:57:24.473736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:35.609  [2024-12-15 10:57:24.567044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:35.868  [2024-12-15 10:57:24.744538] 'OCF_Core' volume operations registered
00:19:35.868  [2024-12-15 10:57:24.747734] 'OCF_Cache' volume operations registered
00:19:35.868  [2024-12-15 10:57:24.751349] 'OCF Composite' volume operations registered
00:19:35.868  [2024-12-15 10:57:24.754564] 'SPDK_block_device' volume operations registered
00:19:36.127  [2024-12-15 10:57:24.962286] Inserting cache MalCache1
00:19:36.127  [2024-12-15 10:57:24.962722] MalCache1: Metadata initialized
00:19:36.127  [2024-12-15 10:57:24.963168] MalCache1: Successfully added
00:19:36.127  [2024-12-15 10:57:24.963183] MalCache1: Cache mode : wt
00:19:36.127  [2024-12-15 10:57:24.973146] MalCache1: Super block config offset : 0 kiB
00:19:36.127  [2024-12-15 10:57:24.973169] MalCache1: Super block config size : 2200 B
00:19:36.127  [2024-12-15 10:57:24.973176] MalCache1: Super block runtime offset : 128 kiB
00:19:36.127  [2024-12-15 10:57:24.973183] MalCache1: Super block runtime size : 4 B
00:19:36.127  [2024-12-15 10:57:24.973190] MalCache1: Reserved offset : 256 kiB
00:19:36.127  [2024-12-15 10:57:24.973197] MalCache1: Reserved size : 128 kiB
00:19:36.127  [2024-12-15 10:57:24.973203] MalCache1: Part config offset : 384 kiB
00:19:36.127  [2024-12-15 10:57:24.973210] MalCache1: Part config size : 48 kiB
00:19:36.127  [2024-12-15 10:57:24.973216] MalCache1: Part runtime offset : 640 kiB
00:19:36.127  [2024-12-15 10:57:24.973223] MalCache1: Part runtime size : 72 kiB
00:19:36.127  [2024-12-15 10:57:24.973229] MalCache1: Core config offset : 768 kiB
00:19:36.127  [2024-12-15 10:57:24.973235] MalCache1: Core config size : 512 kiB
00:19:36.127  [2024-12-15 10:57:24.973242] MalCache1: Core runtime offset : 1792 kiB
00:19:36.127  [2024-12-15 10:57:24.973248] MalCache1: Core runtime size : 1172 kiB
00:19:36.127  [2024-12-15 10:57:24.973255] MalCache1: Core UUID offset : 3072 kiB
00:19:36.127  [2024-12-15 10:57:24.973261] MalCache1: Core UUID size : 16384 kiB
00:19:36.127  [2024-12-15 10:57:24.973268] MalCache1: Cleaning offset : 35840 kiB
00:19:36.127  [2024-12-15 10:57:24.973274] MalCache1: Cleaning size : 788 kiB
00:19:36.127  [2024-12-15 10:57:24.973281] MalCache1: LRU list offset : 36736 kiB
00:19:36.127  [2024-12-15 10:57:24.973287] MalCache1: LRU list size : 592 kiB
00:19:36.127  [2024-12-15 10:57:24.973300] MalCache1: Collision offset : 37376 kiB
00:19:36.127  [2024-12-15 10:57:24.973307] MalCache1: Collision size : 788 kiB
00:19:36.127  [2024-12-15 10:57:24.973313] MalCache1: List info offset : 38272 kiB
00:19:36.127  [2024-12-15 10:57:24.973319] MalCache1: List info size : 592 kiB
00:19:36.127  [2024-12-15 10:57:24.973326] MalCache1: Hash offset : 38912 kiB
00:19:36.127  [2024-12-15 10:57:24.973332] MalCache1: Hash size : 68 kiB
00:19:36.127  [2024-12-15 10:57:24.973339] MalCache1: Cache line size: 4 kiB
00:19:36.127  [2024-12-15 10:57:24.973348] MalCache1: Metadata capacity: 20 MiB
00:19:36.127  [2024-12-15 10:57:24.982940] MalCache1: Policy 'always' initialized successfully
00:19:36.386  [2024-12-15 10:57:25.194175] MalCache1: Done saving cache state!
00:19:36.386  [2024-12-15 10:57:25.225271] MalCache1: Cache attached
00:19:36.386  [2024-12-15 10:57:25.225367] MalCache1: Successfully attached
00:19:36.386  [2024-12-15 10:57:25.225649] MalCache1: Inserting core Malloc1
00:19:36.386  [2024-12-15 10:57:25.225674] MalCache1.Malloc1: Sequential cutoff init
00:19:36.386  [2024-12-15 10:57:25.256657] MalCache1.Malloc1: Successfully added
00:19:36.386  [2024-12-15 10:57:25.262510] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:19:36.386  [2024-12-15 10:57:25.262743] MalCache1: Inserting core Malloc2
00:19:36.386  [2024-12-15 10:57:25.262769] MalCache1.Malloc2: Sequential cutoff init
00:19:36.386  [2024-12-15 10:57:25.294325] MalCache1.Malloc2: Successfully added
00:19:36.386  Running I/O for 4 seconds...
00:19:40.580                                                                                                  Latency(us)
00:19:40.580  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:40.580  Job: MalCache1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:19:40.580  	 MalCache1           :       4.01   16323.52      63.76       0.00     0.00    7833.71    1431.82   10428.77
00:19:40.580  Job: MalCache2 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:19:40.580  	 MalCache2           :       4.01   16324.75      63.77       0.00     0.00    7829.24    1374.83   10029.86
00:19:40.580  ===================================================================================================================
00:19:40.580  Total                       :              32648.27     127.53       0.00     0.00    7831.47    1374.83   10428.77
00:19:40.580  [2024-12-15 10:57:29.333091] MalCache1: Flushing cache
00:19:40.580  [2024-12-15 10:57:29.333130] MalCache1: Flushing cache completed
00:19:40.580  [2024-12-15 10:57:29.334009] MalCache1: Stopping cache
00:19:40.580  [2024-12-15 10:57:29.523077] MalCache1: Done saving cache state!
00:19:40.580  [2024-12-15 10:57:29.540258] Cache MalCache1 successfully stopped
00:19:41.519  
00:19:41.519  real	0m17.965s
00:19:41.519  user	0m16.430s
00:19:41.519  sys	0m1.663s
00:19:41.519   10:57:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:41.519   10:57:30	-- common/autotest_common.sh@10 -- # set +x
00:19:41.519  ************************************
00:19:41.519  END TEST ocf_bdevperf_iotypes
00:19:41.519  ************************************
00:19:41.519   10:57:30	-- ocf/ocf.sh@13 -- # run_test ocf_stats /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh
00:19:41.519   10:57:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:41.519   10:57:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:41.519   10:57:30	-- common/autotest_common.sh@10 -- # set +x
00:19:41.519  ************************************
00:19:41.519  START TEST ocf_stats
00:19:41.519  ************************************
00:19:41.519   10:57:30	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh
00:19:41.519    10:57:30	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:41.519     10:57:30	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:41.519     10:57:30	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:41.519    10:57:30	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:41.519    10:57:30	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:41.519    10:57:30	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:41.519    10:57:30	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:41.519    10:57:30	-- scripts/common.sh@335 -- # IFS=.-:
00:19:41.519    10:57:30	-- scripts/common.sh@335 -- # read -ra ver1
00:19:41.519    10:57:30	-- scripts/common.sh@336 -- # IFS=.-:
00:19:41.519    10:57:30	-- scripts/common.sh@336 -- # read -ra ver2
00:19:41.519    10:57:30	-- scripts/common.sh@337 -- # local 'op=<'
00:19:41.519    10:57:30	-- scripts/common.sh@339 -- # ver1_l=2
00:19:41.519    10:57:30	-- scripts/common.sh@340 -- # ver2_l=1
00:19:41.519    10:57:30	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:41.519    10:57:30	-- scripts/common.sh@343 -- # case "$op" in
00:19:41.519    10:57:30	-- scripts/common.sh@344 -- # : 1
00:19:41.519    10:57:30	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:41.519    10:57:30	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:41.519     10:57:30	-- scripts/common.sh@364 -- # decimal 1
00:19:41.519     10:57:30	-- scripts/common.sh@352 -- # local d=1
00:19:41.519     10:57:30	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:41.519     10:57:30	-- scripts/common.sh@354 -- # echo 1
00:19:41.519    10:57:30	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:41.519     10:57:30	-- scripts/common.sh@365 -- # decimal 2
00:19:41.519     10:57:30	-- scripts/common.sh@352 -- # local d=2
00:19:41.519     10:57:30	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:41.519     10:57:30	-- scripts/common.sh@354 -- # echo 2
00:19:41.519    10:57:30	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:41.519    10:57:30	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:41.519    10:57:30	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:41.519    10:57:30	-- scripts/common.sh@367 -- # return 0
00:19:41.519    10:57:30	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:41.519    10:57:30	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:41.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:41.519  		--rc genhtml_branch_coverage=1
00:19:41.519  		--rc genhtml_function_coverage=1
00:19:41.519  		--rc genhtml_legend=1
00:19:41.519  		--rc geninfo_all_blocks=1
00:19:41.519  		--rc geninfo_unexecuted_blocks=1
00:19:41.519  		
00:19:41.519  		'
00:19:41.519    10:57:30	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:41.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:41.519  		--rc genhtml_branch_coverage=1
00:19:41.519  		--rc genhtml_function_coverage=1
00:19:41.519  		--rc genhtml_legend=1
00:19:41.519  		--rc geninfo_all_blocks=1
00:19:41.519  		--rc geninfo_unexecuted_blocks=1
00:19:41.519  		
00:19:41.519  		'
00:19:41.519    10:57:30	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:41.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:41.519  		--rc genhtml_branch_coverage=1
00:19:41.519  		--rc genhtml_function_coverage=1
00:19:41.519  		--rc genhtml_legend=1
00:19:41.519  		--rc geninfo_all_blocks=1
00:19:41.519  		--rc geninfo_unexecuted_blocks=1
00:19:41.519  		
00:19:41.519  		'
00:19:41.519    10:57:30	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:41.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:41.519  		--rc genhtml_branch_coverage=1
00:19:41.519  		--rc genhtml_function_coverage=1
00:19:41.519  		--rc genhtml_legend=1
00:19:41.519  		--rc geninfo_all_blocks=1
00:19:41.519  		--rc geninfo_unexecuted_blocks=1
00:19:41.519  		
00:19:41.519  		'
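The xtrace above is the lcov version gate from scripts/common.sh: the installed lcov version is split on '.', '-' and ':' and compared component-wise, and since 1.15 < 2 the pre-2.0 --rc lcov_branch_coverage/lcov_function_coverage option names are exported into LCOV_OPTS and LCOV. A simplified, numeric-only re-sketch of the traced lt()/cmp_versions logic (a hypothetical condensation; the real helper also routes components through decimal() and supports '>', '=' and mixed-length versions):

    # return 0 when $1 is strictly older than $2, 1 otherwise
    lt() {
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    lt 1.15 2 && echo "old lcov: keep the lcov_* --rc option names"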
00:19:41.519   10:57:30	-- integrity/stats.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf
00:19:41.519   10:57:30	-- integrity/stats.sh@12 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/mallocs.conf
00:19:41.519   10:57:30	-- integrity/stats.sh@14 -- # bdev_perf_pid=2203955
00:19:41.519   10:57:30	-- integrity/stats.sh@15 -- # waitforlisten 2203955
00:19:41.519   10:57:30	-- common/autotest_common.sh@829 -- # '[' -z 2203955 ']'
00:19:41.519   10:57:30	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:41.519   10:57:30	-- integrity/stats.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock
00:19:41.519   10:57:30	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:41.519    10:57:30	-- integrity/stats.sh@13 -- # gen_malloc_ocf_json
00:19:41.519   10:57:30	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:41.519  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:41.520   10:57:30	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:41.520    10:57:30	-- integrity/mallocs.conf@2 -- # local size=300
00:19:41.520   10:57:30	-- common/autotest_common.sh@10 -- # set +x
00:19:41.520    10:57:30	-- integrity/mallocs.conf@3 -- # local block_size=512
00:19:41.520    10:57:30	-- integrity/mallocs.conf@4 -- # local config
00:19:41.520    10:57:30	-- integrity/mallocs.conf@6 -- # local malloc malloc_devs=3
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc = 0 ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:41.520  {
00:19:41.520    "method": "bdev_malloc_create",
00:19:41.520    "params": {
00:19:41.520      "name": "Malloc$malloc",
00:19:41.520      "num_blocks": $(( (size << 20) / block_size )),
00:19:41.520      "block_size": 512
00:19:41.520    }
00:19:41.520  }
00:19:41.520  JSON
00:19:41.520  )")
00:19:41.520     10:57:30	-- integrity/mallocs.conf@21 -- # cat
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:41.520  {
00:19:41.520    "method": "bdev_malloc_create",
00:19:41.520    "params": {
00:19:41.520      "name": "Malloc$malloc",
00:19:41.520      "num_blocks": $(( (size << 20) / block_size )),
00:19:41.520      "block_size": 512
00:19:41.520    }
00:19:41.520  }
00:19:41.520  JSON
00:19:41.520  )")
00:19:41.520     10:57:30	-- integrity/mallocs.conf@21 -- # cat
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@21 -- # config+=("$(cat <<-JSON
00:19:41.520  {
00:19:41.520    "method": "bdev_malloc_create",
00:19:41.520    "params": {
00:19:41.520      "name": "Malloc$malloc",
00:19:41.520      "num_blocks": $(( (size << 20) / block_size )),
00:19:41.520      "block_size": 512
00:19:41.520    }
00:19:41.520  }
00:19:41.520  JSON
00:19:41.520  )")
00:19:41.520     10:57:30	-- integrity/mallocs.conf@21 -- # cat
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc++  ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@7 -- # (( malloc < malloc_devs ))
00:19:41.520    10:57:30	-- integrity/mallocs.conf@24 -- # local ocfs ocf ocf_mode ocf_cache ocf_core
00:19:41.520    10:57:30	-- integrity/mallocs.conf@25 -- # ocfs=(1 2)
00:19:41.520    10:57:30	-- integrity/mallocs.conf@26 -- # ocf_mode[1]=wt
00:19:41.520    10:57:30	-- integrity/mallocs.conf@26 -- # ocf_cache[1]=Malloc0
00:19:41.520    10:57:30	-- integrity/mallocs.conf@26 -- # ocf_core[1]=Malloc1
00:19:41.520    10:57:30	-- integrity/mallocs.conf@27 -- # ocf_mode[2]=pt
00:19:41.520    10:57:30	-- integrity/mallocs.conf@27 -- # ocf_cache[2]=Malloc0
00:19:41.520    10:57:30	-- integrity/mallocs.conf@27 -- # ocf_core[2]=Malloc2
00:19:41.520    10:57:30	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:41.520    10:57:30	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:41.520  {
00:19:41.520    "method": "bdev_ocf_create",
00:19:41.520    "params": {
00:19:41.520      "name": "MalCache$ocf",
00:19:41.520      "mode": "${ocf_mode[ocf]}",
00:19:41.520      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:41.520      "core_bdev_name": "${ocf_core[ocf]}"
00:19:41.520    }
00:19:41.520  }
00:19:41.520  JSON
00:19:41.520  )")
00:19:41.520     10:57:30	-- integrity/mallocs.conf@44 -- # cat
00:19:41.520    10:57:30	-- integrity/mallocs.conf@29 -- # for ocf in "${ocfs[@]}"
00:19:41.520    10:57:30	-- integrity/mallocs.conf@44 -- # config+=("$(cat <<-JSON
00:19:41.520  {
00:19:41.520    "method": "bdev_ocf_create",
00:19:41.520    "params": {
00:19:41.520      "name": "MalCache$ocf",
00:19:41.520      "mode": "${ocf_mode[ocf]}",
00:19:41.520      "cache_bdev_name": "${ocf_cache[ocf]}",
00:19:41.520      "core_bdev_name": "${ocf_core[ocf]}"
00:19:41.520    }
00:19:41.520  }
00:19:41.520  JSON
00:19:41.520  )")
00:19:41.520     10:57:30	-- integrity/mallocs.conf@44 -- # cat
00:19:41.520    10:57:30	-- integrity/mallocs.conf@47 -- # jq .
00:19:41.520     10:57:30	-- integrity/mallocs.conf@47 -- # IFS=,
00:19:41.520     10:57:30	-- integrity/mallocs.conf@47 -- # printf '%s\n' '{
00:19:41.520    "method": "bdev_malloc_create",
00:19:41.520    "params": {
00:19:41.520      "name": "Malloc0",
00:19:41.520      "num_blocks": 614400,
00:19:41.520      "block_size": 512
00:19:41.520    }
00:19:41.520  },{
00:19:41.520    "method": "bdev_malloc_create",
00:19:41.520    "params": {
00:19:41.520      "name": "Malloc1",
00:19:41.520      "num_blocks": 614400,
00:19:41.520      "block_size": 512
00:19:41.520    }
00:19:41.520  },{
00:19:41.520    "method": "bdev_malloc_create",
00:19:41.520    "params": {
00:19:41.520      "name": "Malloc2",
00:19:41.520      "num_blocks": 614400,
00:19:41.520      "block_size": 512
00:19:41.520    }
00:19:41.520  },{
00:19:41.520    "method": "bdev_ocf_create",
00:19:41.520    "params": {
00:19:41.520      "name": "MalCache1",
00:19:41.520      "mode": "wt",
00:19:41.520      "cache_bdev_name": "Malloc0",
00:19:41.520      "core_bdev_name": "Malloc1"
00:19:41.520    }
00:19:41.520  },{
00:19:41.520    "method": "bdev_ocf_create",
00:19:41.520    "params": {
00:19:41.520      "name": "MalCache2",
00:19:41.520      "mode": "pt",
00:19:41.520      "cache_bdev_name": "Malloc0",
00:19:41.520      "core_bdev_name": "Malloc2"
00:19:41.520    }
00:19:41.520  }'
00:19:41.520  [2024-12-15 10:57:30.474945] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:41.520  [2024-12-15 10:57:30.475019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203955 ]
00:19:41.520  EAL: No free 2048 kB hugepages reported on node 1
00:19:41.779  [2024-12-15 10:57:30.581110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:41.779  [2024-12-15 10:57:30.681689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:42.038  [2024-12-15 10:57:30.867332] 'OCF_Core' volume operations registered
00:19:42.038  [2024-12-15 10:57:30.870824] 'OCF_Cache' volume operations registered
00:19:42.038  [2024-12-15 10:57:30.874787] 'OCF Composite' volume operations registered
00:19:42.038  [2024-12-15 10:57:30.878290] 'SPDK_block_device' volume operations registered
00:19:42.298  [2024-12-15 10:57:31.122194] Inserting cache MalCache1
00:19:42.298  [2024-12-15 10:57:31.122703] MalCache1: Metadata initialized
00:19:42.298  [2024-12-15 10:57:31.123151] MalCache1: Successfully added
00:19:42.298  [2024-12-15 10:57:31.123167] MalCache1: Cache mode : wt
00:19:42.298  [2024-12-15 10:57:31.134190] MalCache1: Super block config offset : 0 kiB
00:19:42.298  [2024-12-15 10:57:31.134213] MalCache1: Super block config size : 2200 B
00:19:42.298  [2024-12-15 10:57:31.134221] MalCache1: Super block runtime offset : 128 kiB
00:19:42.298  [2024-12-15 10:57:31.134227] MalCache1: Super block runtime size : 4 B
00:19:42.298  [2024-12-15 10:57:31.134234] MalCache1: Reserved offset : 256 kiB
00:19:42.298  [2024-12-15 10:57:31.134240] MalCache1: Reserved size : 128 kiB
00:19:42.298  [2024-12-15 10:57:31.134247] MalCache1: Part config offset : 384 kiB
00:19:42.298  [2024-12-15 10:57:31.134253] MalCache1: Part config size : 48 kiB
00:19:42.298  [2024-12-15 10:57:31.134260] MalCache1: Part runtime offset : 640 kiB
00:19:42.298  [2024-12-15 10:57:31.134266] MalCache1: Part runtime size : 72 kiB
00:19:42.298  [2024-12-15 10:57:31.134272] MalCache1: Core config offset : 768 kiB
00:19:42.298  [2024-12-15 10:57:31.134279] MalCache1: Core config size : 512 kiB
00:19:42.298  [2024-12-15 10:57:31.134285] MalCache1: Core runtime offset : 1792 kiB
00:19:42.298  [2024-12-15 10:57:31.134292] MalCache1: Core runtime size : 1172 kiB
00:19:42.298  [2024-12-15 10:57:31.134298] MalCache1: Core UUID offset : 3072 kiB
00:19:42.298  [2024-12-15 10:57:31.134305] MalCache1: Core UUID size : 16384 kiB
00:19:42.298  [2024-12-15 10:57:31.134311] MalCache1: Cleaning offset : 35840 kiB
00:19:42.298  [2024-12-15 10:57:31.134317] MalCache1: Cleaning size : 788 kiB
00:19:42.298  [2024-12-15 10:57:31.134324] MalCache1: LRU list offset : 36736 kiB
00:19:42.298  [2024-12-15 10:57:31.134330] MalCache1: LRU list size : 592 kiB
00:19:42.298  [2024-12-15 10:57:31.134337] MalCache1: Collision offset : 37376 kiB
00:19:42.298  [2024-12-15 10:57:31.134343] MalCache1: Collision size : 788 kiB
00:19:42.298  [2024-12-15 10:57:31.134349] MalCache1: List info offset : 38272 kiB
00:19:42.298  [2024-12-15 10:57:31.134356] MalCache1: List info size : 592 kiB
00:19:42.298  [2024-12-15 10:57:31.134362] MalCache1: Hash offset : 38912 kiB
00:19:42.298  [2024-12-15 10:57:31.134368] MalCache1: Hash size : 68 kiB
00:19:42.298  [2024-12-15 10:57:31.134375] MalCache1: Cache line size: 4 kiB
00:19:42.298  [2024-12-15 10:57:31.134384] MalCache1: Metadata capacity: 20 MiB
00:19:42.298  [2024-12-15 10:57:31.144951] MalCache1: Policy 'always' initialized successfully
00:19:42.558  [2024-12-15 10:57:31.356610] MalCache1: Done saving cache state!
00:19:42.558  [2024-12-15 10:57:31.387650] MalCache1: Cache attached
00:19:42.558  [2024-12-15 10:57:31.387746] MalCache1: Successfully attached
00:19:42.558  [2024-12-15 10:57:31.388045] MalCache1: Inserting core Malloc1
00:19:42.558  [2024-12-15 10:57:31.388067] MalCache1.Malloc1: Sequential cutoff init
00:19:42.558  [2024-12-15 10:57:31.418939] MalCache1.Malloc1: Successfully added
00:19:42.558  [2024-12-15 10:57:31.424818] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev MalCache2 connects to existing cache device Malloc0
00:19:42.558  [2024-12-15 10:57:31.425072] MalCache1: Inserting core Malloc2
00:19:42.558  [2024-12-15 10:57:31.425094] MalCache1.Malloc2: Sequential cutoff init
00:19:42.558  [2024-12-15 10:57:31.456321] MalCache1.Malloc2: Successfully added
00:19:42.558  Running I/O for 120 seconds...
00:19:43.496   10:57:32	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:43.496   10:57:32	-- common/autotest_common.sh@862 -- # return 0
00:19:43.496   10:57:32	-- integrity/stats.sh@16 -- # sleep 1
00:19:44.436   10:57:33	-- integrity/stats.sh@17 -- # rpc_cmd bdev_ocf_get_stats MalCache1
00:19:44.436   10:57:33	-- common/autotest_common.sh@561 -- # xtrace_disable
00:19:44.436   10:57:33	-- common/autotest_common.sh@10 -- # set +x
00:19:44.436  {
00:19:44.436  "usage": {
00:19:44.436  "occupancy": {
00:19:44.436  "count": 22496,
00:19:44.436  "percentage": "33.55",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "free": {
00:19:44.436  "count": 22048,
00:19:44.436  "percentage": "32.88",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "clean": {
00:19:44.436  "count": 22496,
00:19:44.436  "percentage": "100.0",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "dirty": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  }
00:19:44.436  },
00:19:44.436  "requests": {
00:19:44.436  "rd_hits": {
00:19:44.436  "count": 2,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "rd_partial_misses": {
00:19:44.436  "count": 1,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "rd_full_misses": {
00:19:44.436  "count": 1,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "rd_total": {
00:19:44.436  "count": 4,
00:19:44.436  "percentage": "0.1",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "wr_hits": {
00:19:44.436  "count": 8,
00:19:44.436  "percentage": "0.3",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "wr_partial_misses": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "wr_full_misses": {
00:19:44.436  "count": 22488,
00:19:44.436  "percentage": "99.94",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "wr_total": {
00:19:44.436  "count": 22496,
00:19:44.436  "percentage": "99.98",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "rd_pt": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "wr_pt": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "serviced": {
00:19:44.436  "count": 22500,
00:19:44.436  "percentage": "100.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "total": {
00:19:44.436  "count": 22500,
00:19:44.436  "percentage": "100.0",
00:19:44.436  "units": "Requests"
00:19:44.436  }
00:19:44.436  },
00:19:44.436  "blocks": {
00:19:44.436  "core_volume_rd": {
00:19:44.436  "count": 9,
00:19:44.436  "percentage": "0.3",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "core_volume_wr": {
00:19:44.436  "count": 22496,
00:19:44.436  "percentage": "99.96",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "core_volume_total": {
00:19:44.436  "count": 22505,
00:19:44.436  "percentage": "100.0",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "cache_volume_rd": {
00:19:44.436  "count": 2,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "cache_volume_wr": {
00:19:44.436  "count": 22505,
00:19:44.436  "percentage": "99.99",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "cache_volume_total": {
00:19:44.436  "count": 22507,
00:19:44.436  "percentage": "100.0",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "volume_rd": {
00:19:44.436  "count": 11,
00:19:44.436  "percentage": "0.4",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "volume_wr": {
00:19:44.436  "count": 22496,
00:19:44.436  "percentage": "99.95",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  },
00:19:44.436  "volume_total": {
00:19:44.436  "count": 22507,
00:19:44.436  "percentage": "100.0",
00:19:44.436  "units": "4KiB blocks"
00:19:44.436  }
00:19:44.436  },
00:19:44.436  "errors": {
00:19:44.436  "core_volume_rd": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "core_volume_wr": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "core_volume_total": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "cache_volume_rd": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "cache_volume_wr": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "cache_volume_total": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  },
00:19:44.436  "total": {
00:19:44.436  "count": 0,
00:19:44.436  "percentage": "0.0",
00:19:44.436  "units": "Requests"
00:19:44.436  }
00:19:44.436  }
00:19:44.436  }
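The JSON above is the bdev_ocf_get_stats payload the test asserts on: counters grouped into usage (occupancy/free/clean/dirty in 4 KiB blocks), requests (read and write hits and misses), blocks (core- and cache-volume traffic) and errors, each carrying a count, a percentage and its units. A short sketch of pulling headline numbers out of that payload with the plain RPC client (same rpc.py path and socket used elsewhere in this run; the jq paths mirror the fields above):

    rpc=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    stats=$($rpc -s /var/tmp/spdk.sock bdev_ocf_get_stats MalCache1)
    jq '.usage.occupancy.count'   <<< "$stats"      # 22496 cache lines in use
    jq '.requests.wr_total.count' <<< "$stats"      # 22496 write requests
    jq -e '.errors.total.count == 0' <<< "$stats"   # exit 0 when the run saw no errors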
00:19:44.436   10:57:33	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:44.436   10:57:33	-- integrity/stats.sh@18 -- # kill -9 2203955
00:19:44.436   10:57:33	-- integrity/stats.sh@19 -- # wait 2203955
00:19:44.436  /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/stats.sh: line 19: 2203955 Killed                  $bdevperf --json <(gen_malloc_ocf_json) -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock
00:19:44.436   10:57:33	-- integrity/stats.sh@19 -- # true
00:19:44.436  
00:19:44.436  real	0m2.976s
00:19:44.436  user	0m3.021s
00:19:44.436  sys	0m0.696s
00:19:44.436   10:57:33	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:44.436   10:57:33	-- common/autotest_common.sh@10 -- # set +x
00:19:44.436  ************************************
00:19:44.436  END TEST ocf_stats
00:19:44.436  ************************************
00:19:44.436   10:57:33	-- ocf/ocf.sh@14 -- # run_test ocf_flush /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh
00:19:44.436   10:57:33	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:44.436   10:57:33	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:44.436   10:57:33	-- common/autotest_common.sh@10 -- # set +x
00:19:44.436  ************************************
00:19:44.436  START TEST ocf_flush
00:19:44.436  ************************************
00:19:44.436   10:57:33	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/integrity/flush.sh
00:19:44.436    10:57:33	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:44.436     10:57:33	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:44.436     10:57:33	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:44.436    10:57:33	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:44.436    10:57:33	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:44.436    10:57:33	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:44.436    10:57:33	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:44.436    10:57:33	-- scripts/common.sh@335 -- # IFS=.-:
00:19:44.436    10:57:33	-- scripts/common.sh@335 -- # read -ra ver1
00:19:44.436    10:57:33	-- scripts/common.sh@336 -- # IFS=.-:
00:19:44.436    10:57:33	-- scripts/common.sh@336 -- # read -ra ver2
00:19:44.436    10:57:33	-- scripts/common.sh@337 -- # local 'op=<'
00:19:44.436    10:57:33	-- scripts/common.sh@339 -- # ver1_l=2
00:19:44.436    10:57:33	-- scripts/common.sh@340 -- # ver2_l=1
00:19:44.436    10:57:33	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:44.436    10:57:33	-- scripts/common.sh@343 -- # case "$op" in
00:19:44.436    10:57:33	-- scripts/common.sh@344 -- # : 1
00:19:44.437    10:57:33	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:44.437    10:57:33	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:44.437     10:57:33	-- scripts/common.sh@364 -- # decimal 1
00:19:44.437     10:57:33	-- scripts/common.sh@352 -- # local d=1
00:19:44.437     10:57:33	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:44.437     10:57:33	-- scripts/common.sh@354 -- # echo 1
00:19:44.437    10:57:33	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:44.437     10:57:33	-- scripts/common.sh@365 -- # decimal 2
00:19:44.437     10:57:33	-- scripts/common.sh@352 -- # local d=2
00:19:44.437     10:57:33	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:44.437     10:57:33	-- scripts/common.sh@354 -- # echo 2
00:19:44.437    10:57:33	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:44.437    10:57:33	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:44.437    10:57:33	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:44.437    10:57:33	-- scripts/common.sh@367 -- # return 0
00:19:44.437    10:57:33	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:44.437    10:57:33	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:44.437  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:44.437  		--rc genhtml_branch_coverage=1
00:19:44.437  		--rc genhtml_function_coverage=1
00:19:44.437  		--rc genhtml_legend=1
00:19:44.437  		--rc geninfo_all_blocks=1
00:19:44.437  		--rc geninfo_unexecuted_blocks=1
00:19:44.437  		
00:19:44.437  		'
00:19:44.437    10:57:33	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:44.437  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:44.437  		--rc genhtml_branch_coverage=1
00:19:44.437  		--rc genhtml_function_coverage=1
00:19:44.437  		--rc genhtml_legend=1
00:19:44.437  		--rc geninfo_all_blocks=1
00:19:44.437  		--rc geninfo_unexecuted_blocks=1
00:19:44.437  		
00:19:44.437  		'
00:19:44.437    10:57:33	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:44.437  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:44.437  		--rc genhtml_branch_coverage=1
00:19:44.437  		--rc genhtml_function_coverage=1
00:19:44.437  		--rc genhtml_legend=1
00:19:44.437  		--rc geninfo_all_blocks=1
00:19:44.437  		--rc geninfo_unexecuted_blocks=1
00:19:44.437  		
00:19:44.437  		'
00:19:44.437    10:57:33	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:44.437  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:44.437  		--rc genhtml_branch_coverage=1
00:19:44.437  		--rc genhtml_function_coverage=1
00:19:44.437  		--rc genhtml_legend=1
00:19:44.437  		--rc geninfo_all_blocks=1
00:19:44.437  		--rc geninfo_unexecuted_blocks=1
00:19:44.437  		
00:19:44.437  		'
00:19:44.437   10:57:33	-- integrity/flush.sh@10 -- # bdevperf=/var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf
00:19:44.437   10:57:33	-- integrity/flush.sh@11 -- # rpc_py='/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
00:19:44.437   10:57:33	-- integrity/flush.sh@73 -- # bdevperf_pid=2204394
00:19:44.437   10:57:33	-- integrity/flush.sh@74 -- # trap 'killprocess $bdevperf_pid' SIGINT SIGTERM EXIT
00:19:44.437   10:57:33	-- integrity/flush.sh@75 -- # waitforlisten 2204394
00:19:44.437   10:57:33	-- common/autotest_common.sh@829 -- # '[' -z 2204394 ']'
00:19:44.437   10:57:33	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:44.437   10:57:33	-- integrity/flush.sh@72 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w write -t 120 -r /var/tmp/spdk.sock
00:19:44.437    10:57:33	-- integrity/flush.sh@72 -- # bdevperf_config
00:19:44.437   10:57:33	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:44.437   10:57:33	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:44.437  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:44.437    10:57:33	-- integrity/flush.sh@19 -- # local config
00:19:44.437   10:57:33	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:44.437   10:57:33	-- common/autotest_common.sh@10 -- # set +x
00:19:44.437     10:57:33	-- integrity/flush.sh@50 -- # cat
00:19:44.437    10:57:33	-- integrity/flush.sh@50 -- # config='{
00:19:44.437    "method": "bdev_malloc_create",
00:19:44.437    "params": {
00:19:44.437  "name": "Malloc0",
00:19:44.437  "num_blocks": 102400,
00:19:44.437  "block_size": 512
00:19:44.437    }
00:19:44.437  },
00:19:44.437  {
00:19:44.437    "method": "bdev_malloc_create",
00:19:44.437    "params": {
00:19:44.437  "name": "Malloc1",
00:19:44.437  "num_blocks": 1024000,
00:19:44.437  "block_size": 512
00:19:44.437    }
00:19:44.437  },
00:19:44.437  {
00:19:44.437    "method": "bdev_ocf_create",
00:19:44.437    "params": {
00:19:44.437  "name": "MalCache0",
00:19:44.437  "mode": "wb",
00:19:44.437  "cache_line_size": 4,
00:19:44.437  "cache_bdev_name": "Malloc0",
00:19:44.437  "core_bdev_name": "Malloc1"
00:19:44.437    }
00:19:44.437  }'
00:19:44.437    10:57:33	-- integrity/flush.sh@52 -- # jq .
00:19:44.745     10:57:33	-- integrity/flush.sh@53 -- # IFS=,
00:19:44.745     10:57:33	-- integrity/flush.sh@54 -- # printf '%s\n' '{
00:19:44.745    "method": "bdev_malloc_create",
00:19:44.745    "params": {
00:19:44.745  "name": "Malloc0",
00:19:44.745  "num_blocks": 102400,
00:19:44.745  "block_size": 512
00:19:44.745    }
00:19:44.745  },
00:19:44.745  {
00:19:44.745    "method": "bdev_malloc_create",
00:19:44.745    "params": {
00:19:44.745  "name": "Malloc1",
00:19:44.745  "num_blocks": 1024000,
00:19:44.745  "block_size": 512
00:19:44.745    }
00:19:44.745  },
00:19:44.745  {
00:19:44.745    "method": "bdev_ocf_create",
00:19:44.745    "params": {
00:19:44.745  "name": "MalCache0",
00:19:44.745  "mode": "wb",
00:19:44.745  "cache_line_size": 4,
00:19:44.745  "cache_bdev_name": "Malloc0",
00:19:44.745  "core_bdev_name": "Malloc1"
00:19:44.745    }
00:19:44.745  }'
00:19:44.745  [2024-12-15 10:57:33.490703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:44.745  [2024-12-15 10:57:33.490778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2204394 ]
00:19:44.745  EAL: No free 2048 kB hugepages reported on node 1
00:19:44.745  [2024-12-15 10:57:33.587842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:44.745  [2024-12-15 10:57:33.692030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:45.015  [2024-12-15 10:57:33.879734] 'OCF_Core' volume operations registered
00:19:45.015  [2024-12-15 10:57:33.882937] 'OCF_Cache' volume operations registered
00:19:45.015  [2024-12-15 10:57:33.886529] 'OCF Composite' volume operations registered
00:19:45.015  [2024-12-15 10:57:33.889753] 'SPDK_block_device' volume operations registered
00:19:45.284  [2024-12-15 10:57:34.029553] Inserting cache MalCache0
00:19:45.284  [2024-12-15 10:57:34.029986] MalCache0: Metadata initialized
00:19:45.284  [2024-12-15 10:57:34.030433] MalCache0: Successfully added
00:19:45.284  [2024-12-15 10:57:34.030448] MalCache0: Cache mode : wb
00:19:45.284  [2024-12-15 10:57:34.040124] MalCache0: Super block config offset : 0 kiB
00:19:45.284  [2024-12-15 10:57:34.040144] MalCache0: Super block config size : 2200 B
00:19:45.284  [2024-12-15 10:57:34.040151] MalCache0: Super block runtime offset : 128 kiB
00:19:45.284  [2024-12-15 10:57:34.040158] MalCache0: Super block runtime size : 4 B
00:19:45.284  [2024-12-15 10:57:34.040164] MalCache0: Reserved offset : 256 kiB
00:19:45.284  [2024-12-15 10:57:34.040171] MalCache0: Reserved size : 128 kiB
00:19:45.284  [2024-12-15 10:57:34.040177] MalCache0: Part config offset : 384 kiB
00:19:45.284  [2024-12-15 10:57:34.040184] MalCache0: Part config size : 48 kiB
00:19:45.284  [2024-12-15 10:57:34.040190] MalCache0: Part runtime offset : 640 kiB
00:19:45.284  [2024-12-15 10:57:34.040197] MalCache0: Part runtime size : 72 kiB
00:19:45.284  [2024-12-15 10:57:34.040203] MalCache0: Core config offset : 768 kiB
00:19:45.284  [2024-12-15 10:57:34.040209] MalCache0: Core config size : 512 kiB
00:19:45.284  [2024-12-15 10:57:34.040216] MalCache0: Core runtime offset : 1792 kiB
00:19:45.284  [2024-12-15 10:57:34.040222] MalCache0: Core runtime size : 1172 kiB
00:19:45.284  [2024-12-15 10:57:34.040229] MalCache0: Core UUID offset : 3072 kiB
00:19:45.284  [2024-12-15 10:57:34.040235] MalCache0: Core UUID size : 16384 kiB
00:19:45.284  [2024-12-15 10:57:34.040242] MalCache0: Cleaning offset : 35840 kiB
00:19:45.284  [2024-12-15 10:57:34.040248] MalCache0: Cleaning size : 44 kiB
00:19:45.284  [2024-12-15 10:57:34.040254] MalCache0: LRU list offset : 35968 kiB
00:19:45.284  [2024-12-15 10:57:34.040261] MalCache0: LRU list size : 36 kiB
00:19:45.284  [2024-12-15 10:57:34.040267] MalCache0: Collision offset : 36096 kiB
00:19:45.284  [2024-12-15 10:57:34.040274] MalCache0: Collision size : 44 kiB
00:19:45.284  [2024-12-15 10:57:34.040280] MalCache0: List info offset : 36224 kiB
00:19:45.284  [2024-12-15 10:57:34.040287] MalCache0: List info size : 36 kiB
00:19:45.284  [2024-12-15 10:57:34.040293] MalCache0: Hash offset : 36352 kiB
00:19:45.284  [2024-12-15 10:57:34.040299] MalCache0: Hash size : 4 kiB
00:19:45.284  [2024-12-15 10:57:34.040307] MalCache0: Cache line size: 4 kiB
00:19:45.284  [2024-12-15 10:57:34.040315] MalCache0: Metadata capacity: 18 MiB
00:19:45.284  [2024-12-15 10:57:34.049715] MalCache0: Policy 'always' initialized successfully
00:19:45.284  [2024-12-15 10:57:34.137608] MalCache0: Done saving cache state!
00:19:45.284  [2024-12-15 10:57:34.168642] MalCache0: Cache attached
00:19:45.284  [2024-12-15 10:57:34.168738] MalCache0: Successfully attached
00:19:45.284  [2024-12-15 10:57:34.169009] MalCache0: Inserting core Malloc1
00:19:45.284  [2024-12-15 10:57:34.169031] MalCache0.Malloc1: Sequential cutoff init
00:19:45.284  [2024-12-15 10:57:34.199785] MalCache0.Malloc1: Successfully added
00:19:45.284  Running I/O for 120 seconds...
00:19:45.543   10:57:34	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:45.543   10:57:34	-- common/autotest_common.sh@862 -- # return 0
00:19:45.543   10:57:34	-- integrity/flush.sh@76 -- # sleep 5
00:19:50.822   10:57:39	-- integrity/flush.sh@78 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_start MalCache0
00:19:50.822  [2024-12-15 10:57:39.686957] MalCache0: Flushing cache
00:19:50.822   10:57:39	-- integrity/flush.sh@79 -- # sleep 1
00:19:50.822  [2024-12-15 10:57:39.794628] MalCache0: Flushing cache completed
00:19:51.760   10:57:40	-- integrity/flush.sh@81 -- # check_flush_in_progress
00:19:51.760   10:57:40	-- integrity/flush.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0
00:19:51.760   10:57:40	-- integrity/flush.sh@15 -- # jq -e .in_progress
00:19:52.020   10:57:40	-- integrity/flush.sh@84 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_ocf_flush_status MalCache0
00:19:52.020   10:57:40	-- integrity/flush.sh@84 -- # jq -e '.status == 0'
00:19:52.588  true
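The two probes above make up the flush verification: bdev_ocf_flush_status reports in_progress while dirty cache lines are still being written back, then a final status of 0 once the flush completed cleanly. Condensed into one sketch, the start/poll/verify sequence the test performs looks roughly like this (same RPCs and jq filters as traced above; the fixed 1 s sleep is an illustrative choice):

    rpc="/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc bdev_ocf_flush_start MalCache0
    # poll until the flush leaves the in-progress state
    while $rpc bdev_ocf_flush_status MalCache0 | jq -e .in_progress > /dev/null; do
        sleep 1
    done
    # then confirm it finished without error
    $rpc bdev_ocf_flush_status MalCache0 | jq -e '.status == 0'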
00:19:52.588   10:57:41	-- integrity/flush.sh@1 -- # killprocess 2204394
00:19:52.588   10:57:41	-- common/autotest_common.sh@936 -- # '[' -z 2204394 ']'
00:19:52.588   10:57:41	-- common/autotest_common.sh@940 -- # kill -0 2204394
00:19:52.588    10:57:41	-- common/autotest_common.sh@941 -- # uname
00:19:52.588   10:57:41	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:52.588    10:57:41	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2204394
00:19:52.588   10:57:41	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:52.588   10:57:41	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:52.588   10:57:41	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2204394'
00:19:52.588  killing process with pid 2204394
00:19:52.588   10:57:41	-- common/autotest_common.sh@955 -- # kill 2204394
00:19:52.588   10:57:41	-- common/autotest_common.sh@960 -- # wait 2204394
00:19:52.588  Received shutdown signal, test time was about 7.355612 seconds
00:19:52.588                                                                                                  Latency(us)
00:19:52.588  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:19:52.588  Job: MalCache0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:19:52.588  	 MalCache0           :       7.35   40791.05     159.34       0.00     0.00    3133.85     145.14   92092.33
00:19:52.588  ===================================================================================================================
00:19:52.588  Total                       :              40791.05     159.34       0.00     0.00    3133.85     145.14   92092.33
00:19:52.588  [2024-12-15 10:57:41.589416] MalCache0: Flushing cache
00:19:52.848  [2024-12-15 10:57:41.678701] MalCache0: Flushing cache completed
00:19:52.848  [2024-12-15 10:57:41.678774] MalCache0: Stopping cache
00:19:52.848  [2024-12-15 10:57:41.765273] MalCache0: Done saving cache state!
00:19:52.848  [2024-12-15 10:57:41.783541] Cache MalCache0 successfully stopped
00:19:53.416  
00:19:53.416  real	0m9.092s
00:19:53.416  user	0m9.936s
00:19:53.416  sys	0m0.748s
00:19:53.416   10:57:42	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:19:53.416   10:57:42	-- common/autotest_common.sh@10 -- # set +x
00:19:53.416  ************************************
00:19:53.416  END TEST ocf_flush
00:19:53.416  ************************************
00:19:53.416   10:57:42	-- ocf/ocf.sh@15 -- # run_test ocf_create_destruct /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh
00:19:53.416   10:57:42	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:53.416   10:57:42	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:53.416   10:57:42	-- common/autotest_common.sh@10 -- # set +x
00:19:53.416  ************************************
00:19:53.416  START TEST ocf_create_destruct
00:19:53.416  ************************************
00:19:53.416   10:57:42	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/create-destruct.sh
00:19:53.684    10:57:42	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:19:53.685     10:57:42	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:19:53.685     10:57:42	-- common/autotest_common.sh@1690 -- # lcov --version
00:19:53.685    10:57:42	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:19:53.685    10:57:42	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:19:53.685    10:57:42	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:19:53.685    10:57:42	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:19:53.685    10:57:42	-- scripts/common.sh@335 -- # IFS=.-:
00:19:53.685    10:57:42	-- scripts/common.sh@335 -- # read -ra ver1
00:19:53.685    10:57:42	-- scripts/common.sh@336 -- # IFS=.-:
00:19:53.685    10:57:42	-- scripts/common.sh@336 -- # read -ra ver2
00:19:53.685    10:57:42	-- scripts/common.sh@337 -- # local 'op=<'
00:19:53.685    10:57:42	-- scripts/common.sh@339 -- # ver1_l=2
00:19:53.685    10:57:42	-- scripts/common.sh@340 -- # ver2_l=1
00:19:53.685    10:57:42	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:19:53.685    10:57:42	-- scripts/common.sh@343 -- # case "$op" in
00:19:53.685    10:57:42	-- scripts/common.sh@344 -- # : 1
00:19:53.685    10:57:42	-- scripts/common.sh@363 -- # (( v = 0 ))
00:19:53.685    10:57:42	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:53.685     10:57:42	-- scripts/common.sh@364 -- # decimal 1
00:19:53.685     10:57:42	-- scripts/common.sh@352 -- # local d=1
00:19:53.685     10:57:42	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:53.685     10:57:42	-- scripts/common.sh@354 -- # echo 1
00:19:53.685    10:57:42	-- scripts/common.sh@364 -- # ver1[v]=1
00:19:53.685     10:57:42	-- scripts/common.sh@365 -- # decimal 2
00:19:53.685     10:57:42	-- scripts/common.sh@352 -- # local d=2
00:19:53.685     10:57:42	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:53.685     10:57:42	-- scripts/common.sh@354 -- # echo 2
00:19:53.685    10:57:42	-- scripts/common.sh@365 -- # ver2[v]=2
00:19:53.685    10:57:42	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:19:53.685    10:57:42	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:19:53.685    10:57:42	-- scripts/common.sh@367 -- # return 0
00:19:53.685    10:57:42	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:53.685    10:57:42	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:19:53.685  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:53.685  		--rc genhtml_branch_coverage=1
00:19:53.685  		--rc genhtml_function_coverage=1
00:19:53.685  		--rc genhtml_legend=1
00:19:53.685  		--rc geninfo_all_blocks=1
00:19:53.685  		--rc geninfo_unexecuted_blocks=1
00:19:53.685  		
00:19:53.685  		'
00:19:53.685    10:57:42	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:19:53.685  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:53.685  		--rc genhtml_branch_coverage=1
00:19:53.685  		--rc genhtml_function_coverage=1
00:19:53.685  		--rc genhtml_legend=1
00:19:53.685  		--rc geninfo_all_blocks=1
00:19:53.685  		--rc geninfo_unexecuted_blocks=1
00:19:53.685  		
00:19:53.685  		'
00:19:53.685    10:57:42	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:19:53.685  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:53.685  		--rc genhtml_branch_coverage=1
00:19:53.685  		--rc genhtml_function_coverage=1
00:19:53.685  		--rc genhtml_legend=1
00:19:53.685  		--rc geninfo_all_blocks=1
00:19:53.685  		--rc geninfo_unexecuted_blocks=1
00:19:53.685  		
00:19:53.685  		'
00:19:53.686    10:57:42	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:19:53.686  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:53.686  		--rc genhtml_branch_coverage=1
00:19:53.686  		--rc genhtml_function_coverage=1
00:19:53.686  		--rc genhtml_legend=1
00:19:53.686  		--rc geninfo_all_blocks=1
00:19:53.686  		--rc geninfo_unexecuted_blocks=1
00:19:53.686  		
00:19:53.686  		'
00:19:53.686   10:57:42	-- management/create-destruct.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:19:53.686   10:57:42	-- management/create-destruct.sh@21 -- # spdk_pid=2205688
00:19:53.686   10:57:42	-- management/create-destruct.sh@23 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:19:53.686   10:57:42	-- management/create-destruct.sh@25 -- # waitforlisten 2205688
00:19:53.686   10:57:42	-- common/autotest_common.sh@829 -- # '[' -z 2205688 ']'
00:19:53.686   10:57:42	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:53.686   10:57:42	-- common/autotest_common.sh@834 -- # local max_retries=100
00:19:53.686   10:57:42	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:53.686  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:53.686   10:57:42	-- management/create-destruct.sh@20 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:19:53.686   10:57:42	-- common/autotest_common.sh@838 -- # xtrace_disable
00:19:53.686   10:57:42	-- common/autotest_common.sh@10 -- # set +x
00:19:53.686  [2024-12-15 10:57:42.631485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:19:53.686  [2024-12-15 10:57:42.631559] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2205688 ]
00:19:53.686  EAL: No free 2048 kB hugepages reported on node 1
00:19:53.950  [2024-12-15 10:57:42.736413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:53.950  [2024-12-15 10:57:42.837642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:54.209  [2024-12-15 10:57:43.030371] 'OCF_Core' volume operations registered
00:19:54.209  [2024-12-15 10:57:43.033582] 'OCF_Cache' volume operations registered
00:19:54.209  [2024-12-15 10:57:43.037188] 'OCF Composite' volume operations registered
00:19:54.209  [2024-12-15 10:57:43.040447] 'SPDK_block_device' volume operations registered
00:19:55.146   10:57:43	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:55.146   10:57:43	-- common/autotest_common.sh@862 -- # return 0
00:19:55.146   10:57:43	-- management/create-destruct.sh@27 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:19:55.406  Malloc0
00:19:55.666   10:57:44	-- management/create-destruct.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:19:56.242  Malloc1
00:19:56.242   10:57:44	-- management/create-destruct.sh@30 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt Malloc0 NonExisting
00:19:56.242  [2024-12-15 10:57:45.208636] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for core device 'NonExisting' to connect
00:19:56.242  PartCache
00:19:56.242   10:57:45	-- management/create-destruct.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs PartCache
00:19:56.242   10:57:45	-- management/create-destruct.sh@32 -- # jq -e '.[0] | .started == false and .cache.attached and .core.attached == false'
00:19:56.807  true
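Note that jq -e maps the filter's result onto the exit status, so the `true` printed above doubles as the assertion passing. The filter runs against output shaped roughly like the following (structure inferred from the fields tested; values illustrative):

    # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
    # Approximate shape of the PartCache entry while its core is missing:
    #   [ { "name": "PartCache", "started": false,
    #       "cache": { "attached": true }, "core": { "attached": false } } ]
    $rpc_py bdev_ocf_get_bdevs PartCache \
        | jq -e '.[0] | .started == false and .cache.attached and .core.attached == false'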
00:19:56.807   10:57:45	-- management/create-destruct.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs NonExisting
00:19:56.807   10:57:45	-- management/create-destruct.sh@35 -- # jq -e '.[0] | .name == "PartCache"'
00:19:57.065  true
00:19:57.065   10:57:46	-- management/create-destruct.sh@38 -- # bdev_check_claimed Malloc0
00:19:57.065    10:57:46	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:19:57.065    10:57:46	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:19:57.323   10:57:46	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:19:57.323   10:57:46	-- management/create-destruct.sh@14 -- # return 0
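Reassembled from the create-destruct.sh@13-16 xtrace, bdev_check_claimed is a one-field probe whose exit status steers the test (a reconstruction, not a verbatim quote of the script):

    bdev_check_claimed() {
        # Succeed iff the named bdev currently reports claimed == true.
        if [ "$($rpc_py bdev_get_bdevs -b "$1" | jq '.[0].claimed')" = true ]; then
            return 0
        fi
        return 1
    }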
00:19:57.323   10:57:46	-- management/create-destruct.sh@43 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete PartCache
00:19:57.581   10:57:46	-- management/create-destruct.sh@44 -- # bdev_check_claimed Malloc0
00:19:57.581    10:57:46	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:19:57.582    10:57:46	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:19:57.840   10:57:46	-- management/create-destruct.sh@13 -- # '[' false = true ']'
00:19:57.840   10:57:46	-- management/create-destruct.sh@16 -- # return 1
00:19:57.840   10:57:46	-- management/create-destruct.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create FullCache wt Malloc0 Malloc1
00:19:58.098  [2024-12-15 10:57:46.987283] Inserting cache FullCache
00:19:58.098  [2024-12-15 10:57:46.987716] FullCache: Metadata initialized
00:19:58.098  [2024-12-15 10:57:46.988159] FullCache: Successfully added
00:19:58.098  [2024-12-15 10:57:46.988172] FullCache: Cache mode : wt
00:19:58.098  [2024-12-15 10:57:46.998110] FullCache: Super block config offset : 0 kiB
00:19:58.098  [2024-12-15 10:57:46.998136] FullCache: Super block config size : 2200 B
00:19:58.098  [2024-12-15 10:57:46.998143] FullCache: Super block runtime offset : 128 kiB
00:19:58.098  [2024-12-15 10:57:46.998149] FullCache: Super block runtime size : 4 B
00:19:58.098  [2024-12-15 10:57:46.998156] FullCache: Reserved offset : 256 kiB
00:19:58.098  [2024-12-15 10:57:46.998163] FullCache: Reserved size : 128 kiB
00:19:58.098  [2024-12-15 10:57:46.998169] FullCache: Part config offset : 384 kiB
00:19:58.098  [2024-12-15 10:57:46.998175] FullCache: Part config size : 48 kiB
00:19:58.098  [2024-12-15 10:57:46.998182] FullCache: Part runtime offset : 640 kiB
00:19:58.098  [2024-12-15 10:57:46.998189] FullCache: Part runtime size : 72 kiB
00:19:58.098  [2024-12-15 10:57:46.998195] FullCache: Core config offset : 768 kiB
00:19:58.098  [2024-12-15 10:57:46.998201] FullCache: Core config size : 512 kiB
00:19:58.098  [2024-12-15 10:57:46.998208] FullCache: Core runtime offset : 1792 kiB
00:19:58.098  [2024-12-15 10:57:46.998214] FullCache: Core runtime size : 1172 kiB
00:19:58.098  [2024-12-15 10:57:46.998220] FullCache: Core UUID offset : 3072 kiB
00:19:58.098  [2024-12-15 10:57:46.998227] FullCache: Core UUID size : 16384 kiB
00:19:58.098  [2024-12-15 10:57:46.998233] FullCache: Cleaning offset : 35840 kiB
00:19:58.098  [2024-12-15 10:57:46.998240] FullCache: Cleaning size : 196 kiB
00:19:58.098  [2024-12-15 10:57:46.998246] FullCache: LRU list offset : 36096 kiB
00:19:58.098  [2024-12-15 10:57:46.998252] FullCache: LRU list size : 148 kiB
00:19:58.098  [2024-12-15 10:57:46.998259] FullCache: Collision offset : 36352 kiB
00:19:58.098  [2024-12-15 10:57:46.998272] FullCache: Collision size : 196 kiB
00:19:58.098  [2024-12-15 10:57:46.998278] FullCache: List info offset : 36608 kiB
00:19:58.098  [2024-12-15 10:57:46.998285] FullCache: List info size : 148 kiB
00:19:58.098  [2024-12-15 10:57:46.998291] FullCache: Hash offset : 36864 kiB
00:19:58.098  [2024-12-15 10:57:46.998298] FullCache: Hash size : 20 kiB
00:19:58.098  [2024-12-15 10:57:46.998305] FullCache: Cache line size: 4 kiB
00:19:58.098  [2024-12-15 10:57:46.998313] FullCache: Metadata capacity: 18 MiB
00:19:58.098  [2024-12-15 10:57:47.007683] FullCache: Policy 'always' initialized successfully
00:19:58.356  [2024-12-15 10:57:47.120671] FullCache: Done saving cache state!
00:19:58.356  [2024-12-15 10:57:47.151564] FullCache: Cache attached
00:19:58.356  [2024-12-15 10:57:47.151661] FullCache: Successfully attached
00:19:58.356  [2024-12-15 10:57:47.151932] FullCache: Inserting core Malloc1
00:19:58.356  [2024-12-15 10:57:47.151953] FullCache.Malloc1: Sequential cutoff init
00:19:58.356  [2024-12-15 10:57:47.182637] FullCache.Malloc1: Successfully added
00:19:58.356  FullCache
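Everything from the malloc bdevs up to the attached FullCache is plain RPC driving and can be replayed by hand against a running target; condensed from the calls logged above:

    # 101 MB backing devices with 512 B blocks, as in this run.
    $rpc_py bdev_malloc_create 101 512 -b Malloc0
    $rpc_py bdev_malloc_create 101 512 -b Malloc1
    # wt = write-through; argument order is cache device, then core device.
    $rpc_py bdev_ocf_create FullCache wt Malloc0 Malloc1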
00:19:58.356   10:57:47	-- management/create-destruct.sh@51 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs FullCache
00:19:58.356   10:57:47	-- management/create-destruct.sh@51 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:19:58.614  true
00:19:58.614   10:57:47	-- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc0
00:19:58.614    10:57:47	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:19:58.614    10:57:47	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:19:58.872   10:57:47	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:19:58.872   10:57:47	-- management/create-destruct.sh@14 -- # return 0
00:19:58.872   10:57:47	-- management/create-destruct.sh@54 -- # bdev_check_claimed Malloc1
00:19:58.872    10:57:47	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:19:58.872    10:57:47	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1
00:19:59.131   10:57:47	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:19:59.131   10:57:47	-- management/create-destruct.sh@14 -- # return 0
00:19:59.131   10:57:47	-- management/create-destruct.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete FullCache
00:19:59.706  [2024-12-15 10:57:48.476559] FullCache: Flushing cache
00:19:59.706  [2024-12-15 10:57:48.476593] FullCache: Flushing cache completed
00:19:59.706  [2024-12-15 10:57:48.477584] FullCache.Malloc1: Removing core
00:19:59.706  [2024-12-15 10:57:48.510106] FullCache: Core Malloc1 successfully removed
00:19:59.706  [2024-12-15 10:57:48.510156] FullCache: Stopping cache
00:19:59.707  [2024-12-15 10:57:48.617049] FullCache: Done saving cache state!
00:19:59.707  [2024-12-15 10:57:48.634801] Cache FullCache successfully stopped
00:19:59.707   10:57:48	-- management/create-destruct.sh@60 -- # bdev_check_claimed Malloc0
00:19:59.707    10:57:48	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:19:59.707    10:57:48	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:19:59.965   10:57:48	-- management/create-destruct.sh@13 -- # '[' false = true ']'
00:19:59.965   10:57:48	-- management/create-destruct.sh@16 -- # return 1
00:19:59.965   10:57:48	-- management/create-destruct.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create HotCache wt Malloc0 Malloc1
00:20:00.223  [2024-12-15 10:57:49.165425] Inserting cache HotCache
00:20:00.223  [2024-12-15 10:57:49.165848] HotCache: Metadata initialized
00:20:00.223  [2024-12-15 10:57:49.166282] HotCache: Successfully added
00:20:00.223  [2024-12-15 10:57:49.166289] HotCache: Cache mode : wt
00:20:00.223  [2024-12-15 10:57:49.176170] HotCache: Super block config offset : 0 kiB
00:20:00.223  [2024-12-15 10:57:49.176193] HotCache: Super block config size : 2200 B
00:20:00.223  [2024-12-15 10:57:49.176200] HotCache: Super block runtime offset : 128 kiB
00:20:00.223  [2024-12-15 10:57:49.176207] HotCache: Super block runtime size : 4 B
00:20:00.223  [2024-12-15 10:57:49.176214] HotCache: Reserved offset : 256 kiB
00:20:00.223  [2024-12-15 10:57:49.176220] HotCache: Reserved size : 128 kiB
00:20:00.223  [2024-12-15 10:57:49.176227] HotCache: Part config offset : 384 kiB
00:20:00.223  [2024-12-15 10:57:49.176233] HotCache: Part config size : 48 kiB
00:20:00.223  [2024-12-15 10:57:49.176240] HotCache: Part runtime offset : 640 kiB
00:20:00.223  [2024-12-15 10:57:49.176246] HotCache: Part runtime size : 72 kiB
00:20:00.223  [2024-12-15 10:57:49.176259] HotCache: Core config offset : 768 kiB
00:20:00.223  [2024-12-15 10:57:49.176266] HotCache: Core config size : 512 kiB
00:20:00.223  [2024-12-15 10:57:49.176272] HotCache: Core runtime offset : 1792 kiB
00:20:00.223  [2024-12-15 10:57:49.176279] HotCache: Core runtime size : 1172 kiB
00:20:00.223  [2024-12-15 10:57:49.176285] HotCache: Core UUID offset : 3072 kiB
00:20:00.223  [2024-12-15 10:57:49.176292] HotCache: Core UUID size : 16384 kiB
00:20:00.223  [2024-12-15 10:57:49.176298] HotCache: Cleaning offset : 35840 kiB
00:20:00.223  [2024-12-15 10:57:49.176304] HotCache: Cleaning size : 196 kiB
00:20:00.223  [2024-12-15 10:57:49.176311] HotCache: LRU list offset : 36096 kiB
00:20:00.223  [2024-12-15 10:57:49.176317] HotCache: LRU list size : 148 kiB
00:20:00.223  [2024-12-15 10:57:49.176323] HotCache: Collision offset : 36352 kiB
00:20:00.223  [2024-12-15 10:57:49.176330] HotCache: Collision size : 196 kiB
00:20:00.223  [2024-12-15 10:57:49.176336] HotCache: List info offset : 36608 kiB
00:20:00.223  [2024-12-15 10:57:49.176342] HotCache: List info size : 148 kiB
00:20:00.223  [2024-12-15 10:57:49.176349] HotCache: Hash offset : 36864 kiB
00:20:00.223  [2024-12-15 10:57:49.176355] HotCache: Hash size : 20 kiB
00:20:00.223  [2024-12-15 10:57:49.176362] HotCache: Cache line size: 4 kiB
00:20:00.223  [2024-12-15 10:57:49.176371] HotCache: Metadata capacity: 18 MiB
00:20:00.223  [2024-12-15 10:57:49.185830] HotCache: Policy 'always' initialized successfully
00:20:00.481  [2024-12-15 10:57:49.298899] HotCache: Done saving cache state!
00:20:00.481  [2024-12-15 10:57:49.330253] HotCache: Cache attached
00:20:00.481  [2024-12-15 10:57:49.330349] HotCache: Successfully attached
00:20:00.481  [2024-12-15 10:57:49.330614] HotCache: Inserting core Malloc1
00:20:00.481  [2024-12-15 10:57:49.330645] HotCache.Malloc1: Sequential cutoff init
00:20:00.481  [2024-12-15 10:57:49.361983] HotCache.Malloc1: Successfully added
00:20:00.481  HotCache
00:20:00.481   10:57:49	-- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc0
00:20:00.482    10:57:49	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc0
00:20:00.482    10:57:49	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:00.739   10:57:49	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:20:00.739   10:57:49	-- management/create-destruct.sh@14 -- # return 0
00:20:00.739   10:57:49	-- management/create-destruct.sh@67 -- # bdev_check_claimed Malloc1
00:20:00.739    10:57:49	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1
00:20:00.739    10:57:49	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:00.997   10:57:49	-- management/create-destruct.sh@13 -- # '[' true = true ']'
00:20:00.997   10:57:49	-- management/create-destruct.sh@14 -- # return 0
00:20:00.997   10:57:49	-- management/create-destruct.sh@72 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:01.259  [2024-12-15 10:57:50.126409] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'HotCache' because its cache device 'Malloc0' was removed
00:20:01.259  [2024-12-15 10:57:50.126693] HotCache: Flushing cache
00:20:01.259  [2024-12-15 10:57:50.126714] HotCache: Flushing cache completed
00:20:01.259  [2024-12-15 10:57:50.126799] HotCache: Stopping cache
00:20:01.259  [2024-12-15 10:57:50.234950] HotCache: Done saving cache state!
00:20:01.259  [2024-12-15 10:57:50.252817] Cache HotCache successfully stopped
00:20:01.519   10:57:50	-- management/create-destruct.sh@74 -- # bdev_check_claimed Malloc1
00:20:01.519    10:57:50	-- management/create-destruct.sh@13 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Malloc1
00:20:01.519    10:57:50	-- management/create-destruct.sh@13 -- # jq '.[0].claimed'
00:20:01.779   10:57:50	-- management/create-destruct.sh@13 -- # '[' false = true ']'
00:20:01.779   10:57:50	-- management/create-destruct.sh@16 -- # return 1
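This is the hotremove path: pulling the cache device out from under a started OCF bdev makes vbdev_ocf flush and stop the whole instance, leaving the former core unclaimed, which the probe above confirms. The same pair in isolation:

    $rpc_py bdev_malloc_delete Malloc0                    # fires hotremove_cb for HotCache
    $rpc_py bdev_get_bdevs -b Malloc1 | jq '.[0].claimed' # expect: false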
00:20:01.779    10:57:50	-- management/create-destruct.sh@79 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs
00:20:02.039   10:57:50	-- management/create-destruct.sh@79 -- # status='[
00:20:02.039    {
00:20:02.039      "name": "Malloc1",
00:20:02.039      "aliases": [
00:20:02.039        "0d1d0a6e-e9f1-4a34-9b03-22aa209f4996"
00:20:02.039      ],
00:20:02.039      "product_name": "Malloc disk",
00:20:02.039      "block_size": 512,
00:20:02.039      "num_blocks": 206848,
00:20:02.039      "uuid": "0d1d0a6e-e9f1-4a34-9b03-22aa209f4996",
00:20:02.039      "assigned_rate_limits": {
00:20:02.039        "rw_ios_per_sec": 0,
00:20:02.039        "rw_mbytes_per_sec": 0,
00:20:02.039        "r_mbytes_per_sec": 0,
00:20:02.039        "w_mbytes_per_sec": 0
00:20:02.039      },
00:20:02.039      "claimed": false,
00:20:02.039      "zoned": false,
00:20:02.039      "supported_io_types": {
00:20:02.039        "read": true,
00:20:02.039        "write": true,
00:20:02.039        "unmap": true,
00:20:02.039        "write_zeroes": true,
00:20:02.039        "flush": true,
00:20:02.039        "reset": true,
00:20:02.039        "compare": false,
00:20:02.039        "compare_and_write": false,
00:20:02.039        "abort": true,
00:20:02.039        "nvme_admin": false,
00:20:02.039        "nvme_io": false
00:20:02.039      },
00:20:02.039      "memory_domains": [
00:20:02.039        {
00:20:02.039          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:20:02.039          "dma_device_type": 2
00:20:02.039        }
00:20:02.039      ],
00:20:02.039      "driver_specific": {}
00:20:02.039    }
00:20:02.039  ]'
00:20:02.039    10:57:50	-- management/create-destruct.sh@80 -- # echo '[' '{' '"name":' '"Malloc1",' '"aliases":' '[' '"0d1d0a6e-e9f1-4a34-9b03-22aa209f4996"' '],' '"product_name":' '"Malloc' 'disk",' '"block_size":' 512, '"num_blocks":' 206848, '"uuid":' '"0d1d0a6e-e9f1-4a34-9b03-22aa209f4996",' '"assigned_rate_limits":' '{' '"rw_ios_per_sec":' 0, '"rw_mbytes_per_sec":' 0, '"r_mbytes_per_sec":' 0, '"w_mbytes_per_sec":' 0 '},' '"claimed":' false, '"zoned":' false, '"supported_io_types":' '{' '"read":' true, '"write":' true, '"unmap":' true, '"write_zeroes":' true, '"flush":' true, '"reset":' true, '"compare":' false, '"compare_and_write":' false, '"abort":' true, '"nvme_admin":' false, '"nvme_io":' false '},' '"memory_domains":' '[' '{' '"dma_device_id":' '"SPDK_ACCEL_DMA_DEVICE",' '"dma_device_type":' 2 '}' '],' '"driver_specific":' '{}' '}' ']'
00:20:02.039    10:57:50	-- management/create-destruct.sh@80 -- # jq 'map(select(.name == "HotCache")) == []'
00:20:02.039   10:57:50	-- management/create-destruct.sh@80 -- # gone=true
00:20:02.039   10:57:50	-- management/create-destruct.sh@81 -- # [[ true == false ]]
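The `gone` check is a select-then-compare idiom: narrow the full bdev list to entries named HotCache and require the result to be the empty array. Standalone form of the same assertion (the failure branch is inferred; the log only shows the passing case):

    status=$($rpc_py bdev_get_bdevs)
    gone=$(echo "$status" | jq 'map(select(.name == "HotCache")) == []')
    [[ $gone == false ]] && exit 1   # HotCache survived the delete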
00:20:02.039   10:57:50	-- management/create-destruct.sh@87 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create PartCache wt NonExisting Malloc1
00:20:02.299  [2024-12-15 10:57:51.101205] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'PartCache' is waiting for cache device 'NonExisting' to connect
00:20:02.299  PartCache
00:20:02.299   10:57:51	-- management/create-destruct.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:20:02.299   10:57:51	-- management/create-destruct.sh@91 -- # killprocess 2205688
00:20:02.299   10:57:51	-- common/autotest_common.sh@936 -- # '[' -z 2205688 ']'
00:20:02.299   10:57:51	-- common/autotest_common.sh@940 -- # kill -0 2205688
00:20:02.299    10:57:51	-- common/autotest_common.sh@941 -- # uname
00:20:02.299   10:57:51	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:02.299    10:57:51	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2205688
00:20:02.299   10:57:51	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:02.299   10:57:51	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:02.299   10:57:51	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2205688'
00:20:02.299  killing process with pid 2205688
00:20:02.299   10:57:51	-- common/autotest_common.sh@955 -- # kill 2205688
00:20:02.299   10:57:51	-- common/autotest_common.sh@960 -- # wait 2205688
00:20:02.559  [2024-12-15 10:57:51.351357] bdev.c:2354:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc1'!
00:20:02.559  [2024-12-15 10:57:51.351460] vbdev_ocf.c:1361:hotremove_cb: *NOTICE*: Deinitializing 'PartCache' because its core device 'Malloc1' was removed
00:20:02.819  
00:20:02.819  real	0m9.357s
00:20:02.819  user	0m15.183s
00:20:02.819  sys	0m1.680s
00:20:02.819   10:57:51	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:02.819   10:57:51	-- common/autotest_common.sh@10 -- # set +x
00:20:02.819  ************************************
00:20:02.819  END TEST ocf_create_destruct
00:20:02.819  ************************************
00:20:02.819   10:57:51	-- ocf/ocf.sh@16 -- # run_test ocf_multicore /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh
00:20:02.819   10:57:51	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:02.819   10:57:51	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:02.819   10:57:51	-- common/autotest_common.sh@10 -- # set +x
00:20:02.819  ************************************
00:20:02.819  START TEST ocf_multicore
00:20:02.819  ************************************
00:20:02.819   10:57:51	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/multicore.sh
00:20:03.079    10:57:51	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:03.079     10:57:51	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:03.079     10:57:51	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:03.079    10:57:51	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:03.079    10:57:51	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:03.079    10:57:51	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:03.079    10:57:51	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:03.079    10:57:51	-- scripts/common.sh@335 -- # IFS=.-:
00:20:03.079    10:57:51	-- scripts/common.sh@335 -- # read -ra ver1
00:20:03.079    10:57:51	-- scripts/common.sh@336 -- # IFS=.-:
00:20:03.079    10:57:51	-- scripts/common.sh@336 -- # read -ra ver2
00:20:03.079    10:57:51	-- scripts/common.sh@337 -- # local 'op=<'
00:20:03.079    10:57:51	-- scripts/common.sh@339 -- # ver1_l=2
00:20:03.079    10:57:51	-- scripts/common.sh@340 -- # ver2_l=1
00:20:03.079    10:57:51	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:03.079    10:57:51	-- scripts/common.sh@343 -- # case "$op" in
00:20:03.079    10:57:51	-- scripts/common.sh@344 -- # : 1
00:20:03.079    10:57:51	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:03.079    10:57:51	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:03.079     10:57:51	-- scripts/common.sh@364 -- # decimal 1
00:20:03.079     10:57:51	-- scripts/common.sh@352 -- # local d=1
00:20:03.079     10:57:51	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:03.079     10:57:51	-- scripts/common.sh@354 -- # echo 1
00:20:03.079    10:57:51	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:03.079     10:57:51	-- scripts/common.sh@365 -- # decimal 2
00:20:03.079     10:57:51	-- scripts/common.sh@352 -- # local d=2
00:20:03.079     10:57:51	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:03.079     10:57:51	-- scripts/common.sh@354 -- # echo 2
00:20:03.079    10:57:51	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:03.079    10:57:51	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:03.079    10:57:51	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:03.079    10:57:51	-- scripts/common.sh@367 -- # return 0
00:20:03.079    10:57:51	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:03.079    10:57:51	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:03.079  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.079  		--rc genhtml_branch_coverage=1
00:20:03.079  		--rc genhtml_function_coverage=1
00:20:03.079  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
00:20:03.079    10:57:51	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:03.079  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.079  		--rc genhtml_branch_coverage=1
00:20:03.079  		--rc genhtml_function_coverage=1
00:20:03.079  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
00:20:03.079    10:57:51	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:03.079  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.079  		--rc genhtml_branch_coverage=1
00:20:03.079  		--rc genhtml_function_coverage=1
00:20:03.079  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
00:20:03.079    10:57:51	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:03.079  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:03.079  		--rc genhtml_branch_coverage=1
00:20:03.079  		--rc genhtml_function_coverage=1
00:20:03.079  		--rc genhtml_legend=1
00:20:03.079  		--rc geninfo_all_blocks=1
00:20:03.079  		--rc geninfo_unexecuted_blocks=1
00:20:03.079  		
00:20:03.079  		'
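The scripts/common.sh burst above (it repeats before each test script) is the coverage prologue: split the detected lcov version and the 1.15 floor on `.-:`, compare them field by field numerically, then export the matching coverage flags. Its comparison core, paraphrased (not the full cmp_versions function):

    lt() {
        # True iff dotted version $1 sorts strictly before $2.
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not 'less than'
    }

Here `lt 1.15 2` returns 0, which selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spelling seen in the exports.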
00:20:03.079   10:57:51	-- management/multicore.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:20:03.079   10:57:51	-- management/multicore.sh@12 -- # spdk_pid='?'
00:20:03.079   10:57:51	-- management/multicore.sh@24 -- # start_spdk
00:20:03.079   10:57:51	-- management/multicore.sh@15 -- # spdk_pid=2207022
00:20:03.079   10:57:51	-- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:20:03.079   10:57:51	-- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:20:03.079   10:57:51	-- management/multicore.sh@17 -- # waitforlisten 2207022
00:20:03.079   10:57:51	-- common/autotest_common.sh@829 -- # '[' -z 2207022 ']'
00:20:03.079   10:57:51	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:03.079   10:57:51	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:03.079   10:57:51	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:03.079  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:03.079   10:57:51	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:03.079   10:57:51	-- common/autotest_common.sh@10 -- # set +x
00:20:03.079  [2024-12-15 10:57:52.040244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:03.079  [2024-12-15 10:57:52.040319] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207022 ]
00:20:03.079  EAL: No free 2048 kB hugepages reported on node 1
00:20:03.339  [2024-12-15 10:57:52.146631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:03.339  [2024-12-15 10:57:52.246439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:03.598  [2024-12-15 10:57:52.440956] 'OCF_Core' volume operations registered
00:20:03.598  [2024-12-15 10:57:52.444447] 'OCF_Cache' volume operations registered
00:20:03.598  [2024-12-15 10:57:52.448420] 'OCF Composite' volume operations registered
00:20:03.598  [2024-12-15 10:57:52.451941] 'SPDK_block_device' volume operations registered
00:20:04.167   10:57:52	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:04.167   10:57:52	-- common/autotest_common.sh@862 -- # return 0
00:20:04.167   10:57:53	-- management/multicore.sh@28 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core0
00:20:04.167  Core0
00:20:04.426   10:57:53	-- management/multicore.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core1
00:20:04.426  Core1
00:20:04.685   10:57:53	-- management/multicore.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Core0
00:20:04.685  [2024-12-15 10:57:53.673477] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for cache device 'Cache' to connect
00:20:04.685  C1
00:20:04.685   10:57:53	-- management/multicore.sh@32 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core1
00:20:04.944  [2024-12-15 10:57:53.934198] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for cache device 'Cache' to connect
00:20:04.944  C2
00:20:05.203   10:57:53	-- management/multicore.sh@34 -- # jq -e 'any(select(.started)) == false'
00:20:05.203   10:57:53	-- management/multicore.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:05.203  true
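With C1 and C2 both parked on the not-yet-existing 'Cache' device, the @34 check asserts that no OCF bdev has started; jq's any() over a select() stream is false when the stream is empty. In isolation:

    # Passes (exit 0) only while no OCF bdev reports started == true.
    $rpc_py bdev_ocf_get_bdevs | jq -e 'any(select(.started)) == false'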
00:20:05.463   10:57:54	-- management/multicore.sh@37 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache
00:20:05.463  [2024-12-15 10:57:54.460437] Inserting cache C1
00:20:05.463  [2024-12-15 10:57:54.460824] C1: Metadata initialized
00:20:05.463  [2024-12-15 10:57:54.461267] C1: Successfully added
00:20:05.463  [2024-12-15 10:57:54.461281] C1: Cache mode : wt
00:20:05.463  [2024-12-15 10:57:54.461360] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache
00:20:05.463  Cache
00:20:05.463  [2024-12-15 10:57:54.471116] C1: Super block config offset : 0 kiB
00:20:05.463  [2024-12-15 10:57:54.471142] C1: Super block config size : 2200 B
00:20:05.463  [2024-12-15 10:57:54.471149] C1: Super block runtime offset : 128 kiB
00:20:05.463  [2024-12-15 10:57:54.471156] C1: Super block runtime size : 4 B
00:20:05.463  [2024-12-15 10:57:54.471162] C1: Reserved offset : 256 kiB
00:20:05.463  [2024-12-15 10:57:54.471169] C1: Reserved size : 128 kiB
00:20:05.463  [2024-12-15 10:57:54.471175] C1: Part config offset : 384 kiB
00:20:05.463  [2024-12-15 10:57:54.471182] C1: Part config size : 48 kiB
00:20:05.463  [2024-12-15 10:57:54.471188] C1: Part runtime offset : 640 kiB
00:20:05.463  [2024-12-15 10:57:54.471195] C1: Part runtime size : 72 kiB
00:20:05.463  [2024-12-15 10:57:54.471201] C1: Core config offset : 768 kiB
00:20:05.463  [2024-12-15 10:57:54.471207] C1: Core config size : 512 kiB
00:20:05.463  [2024-12-15 10:57:54.471214] C1: Core runtime offset : 1792 kiB
00:20:05.463  [2024-12-15 10:57:54.471220] C1: Core runtime size : 1172 kiB
00:20:05.463  [2024-12-15 10:57:54.471226] C1: Core UUID offset : 3072 kiB
00:20:05.463  [2024-12-15 10:57:54.471233] C1: Core UUID size : 16384 kiB
00:20:05.463  [2024-12-15 10:57:54.471239] C1: Cleaning offset : 35840 kiB
00:20:05.463  [2024-12-15 10:57:54.471246] C1: Cleaning size : 196 kiB
00:20:05.463  [2024-12-15 10:57:54.471252] C1: LRU list offset : 36096 kiB
00:20:05.463  [2024-12-15 10:57:54.471258] C1: LRU list size : 148 kiB
00:20:05.463  [2024-12-15 10:57:54.471265] C1: Collision offset : 36352 kiB
00:20:05.463  [2024-12-15 10:57:54.471271] C1: Collision size : 196 kiB
00:20:05.463  [2024-12-15 10:57:54.471277] C1: List info offset : 36608 kiB
00:20:05.463  [2024-12-15 10:57:54.471284] C1: List info size : 148 kiB
00:20:05.463  [2024-12-15 10:57:54.471290] C1: Hash offset : 36864 kiB
00:20:05.463  [2024-12-15 10:57:54.471296] C1: Hash size : 20 kiB
00:20:05.463  [2024-12-15 10:57:54.471303] C1: Cache line size: 4 kiB
00:20:05.463  [2024-12-15 10:57:54.471311] C1: Metadata capacity: 18 MiB
00:20:05.867  [2024-12-15 10:57:54.480757] C1: Policy 'always' initialized successfully
00:20:05.867   10:57:54	-- management/multicore.sh@39 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:05.867   10:57:54	-- management/multicore.sh@39 -- # jq -e 'all(select(.started)) == true'
00:20:05.867  [2024-12-15 10:57:54.593736] C1: Done saving cache state!
00:20:05.867  [2024-12-15 10:57:54.625074] C1: Cache attached
00:20:05.867  [2024-12-15 10:57:54.625171] C1: Successfully attached
00:20:05.867  [2024-12-15 10:57:54.625453] C1: Inserting core Core1
00:20:05.867  [2024-12-15 10:57:54.625476] C1.Core1: Sequential cutoff init
00:20:05.867  [2024-12-15 10:57:54.657127] C1.Core1: Successfully added
00:20:05.867  [2024-12-15 10:57:54.657903] C1: Inserting core Core0
00:20:05.867  [2024-12-15 10:57:54.657935] C1.Core0: Sequential cutoff init
00:20:05.867  true
00:20:05.867   10:57:54	-- management/multicore.sh@43 -- # waitforbdev C2
00:20:05.867   10:57:54	-- common/autotest_common.sh@897 -- # local bdev_name=C2
00:20:05.867   10:57:54	-- common/autotest_common.sh@898 -- # local bdev_timeout=
00:20:05.867   10:57:54	-- common/autotest_common.sh@899 -- # local i
00:20:05.867   10:57:54	-- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:20:05.867   10:57:54	-- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:20:05.867   10:57:54	-- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:20:05.867  [2024-12-15 10:57:54.690151] C1.Core0: Successfully added
00:20:06.127   10:57:54	-- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b C2 -t 2000
00:20:06.127  [
00:20:06.127    {
00:20:06.127      "name": "C2",
00:20:06.127      "aliases": [
00:20:06.127        "90788845-6fa5-5bdf-9f74-30b15a79637f"
00:20:06.127      ],
00:20:06.127      "product_name": "SPDK OCF",
00:20:06.127      "block_size": 512,
00:20:06.127      "num_blocks": 2048,
00:20:06.127      "uuid": "90788845-6fa5-5bdf-9f74-30b15a79637f",
00:20:06.127      "assigned_rate_limits": {
00:20:06.127        "rw_ios_per_sec": 0,
00:20:06.127        "rw_mbytes_per_sec": 0,
00:20:06.127        "r_mbytes_per_sec": 0,
00:20:06.127        "w_mbytes_per_sec": 0
00:20:06.127      },
00:20:06.127      "claimed": false,
00:20:06.127      "zoned": false,
00:20:06.127      "supported_io_types": {
00:20:06.127        "read": true,
00:20:06.127        "write": true,
00:20:06.127        "unmap": true,
00:20:06.127        "write_zeroes": true,
00:20:06.127        "flush": true,
00:20:06.127        "reset": false,
00:20:06.127        "compare": false,
00:20:06.127        "compare_and_write": false,
00:20:06.127        "abort": false,
00:20:06.127        "nvme_admin": false,
00:20:06.127        "nvme_io": false
00:20:06.127      },
00:20:06.127      "driver_specific": {
00:20:06.127        "cache_device": "Cache",
00:20:06.127        "core_device": "Core1",
00:20:06.127        "mode": "wt",
00:20:06.127        "cache_line_size": 4,
00:20:06.127        "metadata_volatile": false
00:20:06.127      }
00:20:06.127    }
00:20:06.127  ]
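waitforbdev (the autotest_common.sh@897-905 xtrace above) layers two RPCs: bdev_wait_for_examine lets pending examine callbacks finish, then a timed bdev_get_bdevs where -t keeps retrying until the named bdev exists or the 2000 ms default budget runs out. Condensed reconstruction:

    waitforbdev() {
        local bdev_name=$1 bdev_timeout=${2:-2000}   # timeout in ms
        $rpc_py bdev_wait_for_examine
        # -t polls for the bdev to appear instead of failing immediately.
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }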
00:20:06.127   10:57:55	-- common/autotest_common.sh@905 -- # return 0
00:20:06.127   10:57:55	-- management/multicore.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete C2
00:20:06.386  [2024-12-15 10:57:55.309772] C1: Flushing cache
00:20:06.386  [2024-12-15 10:57:55.309809] C1: Flushing cache completed
00:20:06.386  [2024-12-15 10:57:55.310865] C1.Core1: Removing core
00:20:06.386  [2024-12-15 10:57:55.344289] C1: Core Core1 successfully removed
00:20:06.386  [2024-12-15 10:57:55.344347] vbdev_ocf.c: 299:stop_vbdev: *NOTICE*: Not stopping cache instance 'Cache' because it is referenced by other OCF bdev
00:20:06.386   10:57:55	-- management/multicore.sh@49 -- # jq -e '.[0] | .started'
00:20:06.386   10:57:55	-- management/multicore.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C1
00:20:06.644  true
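Deleting C2 removes only its core: C1 still references the shared cache device, so stop_vbdev leaves the cache instance running and the `.started` probe on C1 stays true. The same sequence in isolation:

    $rpc_py bdev_ocf_delete C2                              # drops Core1 only
    $rpc_py bdev_ocf_get_bdevs C1 | jq -e '.[0] | .started' # still true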
00:20:06.644   10:57:55	-- management/multicore.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core1
00:20:06.904  [2024-12-15 10:57:55.855416] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache
00:20:06.904  [2024-12-15 10:57:55.855682] C1: Inserting core Core1
00:20:06.904  [2024-12-15 10:57:55.855707] C1.Core1: Sequential cutoff init
00:20:06.904  [2024-12-15 10:57:55.889103] C1.Core1: Successfully added
00:20:06.904  C2
00:20:06.904   10:57:55	-- management/multicore.sh@54 -- # jq -e '.[0] | .started'
00:20:06.904   10:57:55	-- management/multicore.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs C2
00:20:07.163  true
00:20:07.163   10:57:56	-- management/multicore.sh@59 -- # stop_spdk
00:20:07.163   10:57:56	-- management/multicore.sh@20 -- # killprocess 2207022
00:20:07.163   10:57:56	-- common/autotest_common.sh@936 -- # '[' -z 2207022 ']'
00:20:07.163   10:57:56	-- common/autotest_common.sh@940 -- # kill -0 2207022
00:20:07.163    10:57:56	-- common/autotest_common.sh@941 -- # uname
00:20:07.163   10:57:56	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:07.163    10:57:56	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2207022
00:20:07.422   10:57:56	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:07.422   10:57:56	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:07.422   10:57:56	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2207022'
00:20:07.422  killing process with pid 2207022
00:20:07.422   10:57:56	-- common/autotest_common.sh@955 -- # kill 2207022
00:20:07.423   10:57:56	-- common/autotest_common.sh@960 -- # wait 2207022
00:20:07.423  [2024-12-15 10:57:56.340850] C1: Flushing cache
00:20:07.423  [2024-12-15 10:57:56.340899] C1: Flushing cache completed
00:20:07.423  [2024-12-15 10:57:56.340953] C1: Stopping cache
00:20:07.682  [2024-12-15 10:57:56.448138] C1: Done saving cache state!
00:20:07.682  [2024-12-15 10:57:56.464985] Cache C1 successfully stopped
00:20:07.941   10:57:56	-- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT
00:20:07.941   10:57:56	-- management/multicore.sh@62 -- # start_spdk
00:20:07.941   10:57:56	-- management/multicore.sh@15 -- # spdk_pid=2207726
00:20:07.941   10:57:56	-- management/multicore.sh@16 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:20:07.941   10:57:56	-- management/multicore.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:20:07.941   10:57:56	-- management/multicore.sh@17 -- # waitforlisten 2207726
00:20:07.941   10:57:56	-- common/autotest_common.sh@829 -- # '[' -z 2207726 ']'
00:20:07.941   10:57:56	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:07.941   10:57:56	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:07.941   10:57:56	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:07.941  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:07.941   10:57:56	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:07.941   10:57:56	-- common/autotest_common.sh@10 -- # set +x
00:20:07.941  [2024-12-15 10:57:56.903531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:07.941  [2024-12-15 10:57:56.903605] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2207726 ]
00:20:07.941  EAL: No free 2048 kB hugepages reported on node 1
00:20:08.199  [2024-12-15 10:57:57.009484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:08.199  [2024-12-15 10:57:57.114133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:08.457  [2024-12-15 10:57:57.317102] 'OCF_Core' volume operations registered
00:20:08.458  [2024-12-15 10:57:57.320578] 'OCF_Cache' volume operations registered
00:20:08.458  [2024-12-15 10:57:57.324506] 'OCF Composite' volume operations registered
00:20:08.458  [2024-12-15 10:57:57.328001] 'SPDK_block_device' volume operations registered
00:20:09.025   10:57:57	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:09.025   10:57:57	-- common/autotest_common.sh@862 -- # return 0
00:20:09.025   10:57:57	-- management/multicore.sh@64 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Cache
00:20:09.284  Cache
00:20:09.284   10:57:58	-- management/multicore.sh@65 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc
00:20:09.544  Malloc
00:20:09.544   10:57:58	-- management/multicore.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 1 512 -b Core
00:20:09.804  Core
00:20:09.804   10:57:58	-- management/multicore.sh@68 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Cache Malloc
00:20:10.064  [2024-12-15 10:57:58.889929] Inserting cache C1
00:20:10.064  [2024-12-15 10:57:58.890396] C1: Metadata initialized
00:20:10.064  [2024-12-15 10:57:58.890849] C1: Successfully added
00:20:10.064  [2024-12-15 10:57:58.890864] C1: Cache mode : wt
00:20:10.064  [2024-12-15 10:57:58.901708] C1: Super block config offset : 0 kiB
00:20:10.064  [2024-12-15 10:57:58.901733] C1: Super block config size : 2200 B
00:20:10.064  [2024-12-15 10:57:58.901741] C1: Super block runtime offset : 128 kiB
00:20:10.064  [2024-12-15 10:57:58.901747] C1: Super block runtime size : 4 B
00:20:10.064  [2024-12-15 10:57:58.901754] C1: Reserved offset : 256 kiB
00:20:10.064  [2024-12-15 10:57:58.901760] C1: Reserved size : 128 kiB
00:20:10.064  [2024-12-15 10:57:58.901767] C1: Part config offset : 384 kiB
00:20:10.064  [2024-12-15 10:57:58.901773] C1: Part config size : 48 kiB
00:20:10.064  [2024-12-15 10:57:58.901780] C1: Part runtime offset : 640 kiB
00:20:10.064  [2024-12-15 10:57:58.901793] C1: Part runtime size : 72 kiB
00:20:10.064  [2024-12-15 10:57:58.901799] C1: Core config offset : 768 kiB
00:20:10.064  [2024-12-15 10:57:58.901806] C1: Core config size : 512 kiB
00:20:10.064  [2024-12-15 10:57:58.901812] C1: Core runtime offset : 1792 kiB
00:20:10.064  [2024-12-15 10:57:58.901818] C1: Core runtime size : 1172 kiB
00:20:10.064  [2024-12-15 10:57:58.901825] C1: Core UUID offset : 3072 kiB
00:20:10.064  [2024-12-15 10:57:58.901831] C1: Core UUID size : 16384 kiB
00:20:10.064  [2024-12-15 10:57:58.901837] C1: Cleaning offset : 35840 kiB
00:20:10.064  [2024-12-15 10:57:58.901844] C1: Cleaning size : 196 kiB
00:20:10.064  [2024-12-15 10:57:58.901850] C1: LRU list offset : 36096 kiB
00:20:10.064  [2024-12-15 10:57:58.901856] C1: LRU list size : 148 kiB
00:20:10.064  [2024-12-15 10:57:58.901862] C1: Collision offset : 36352 kiB
00:20:10.064  [2024-12-15 10:57:58.901869] C1: Collision size : 196 kiB
00:20:10.064  [2024-12-15 10:57:58.901875] C1: List info offset : 36608 kiB
00:20:10.064  [2024-12-15 10:57:58.901881] C1: List info size : 148 kiB
00:20:10.064  [2024-12-15 10:57:58.901888] C1: Hash offset : 36864 kiB
00:20:10.064  [2024-12-15 10:57:58.901894] C1: Hash size : 20 kiB
00:20:10.064  [2024-12-15 10:57:58.901901] C1: Cache line size: 4 kiB
00:20:10.064  [2024-12-15 10:57:58.901909] C1: Metadata capacity: 18 MiB
00:20:10.064  [2024-12-15 10:57:58.912210] C1: Policy 'always' initialized successfully
00:20:10.064  [2024-12-15 10:57:59.026016] C1: Done saving cache state!
00:20:10.064  [2024-12-15 10:57:59.057685] C1: Cache attached
00:20:10.064  [2024-12-15 10:57:59.057780] C1: Successfully attached
00:20:10.064  [2024-12-15 10:57:59.058061] C1: Inserting core Malloc
00:20:10.064  [2024-12-15 10:57:59.058082] C1.Malloc: Sequential cutoff init
00:20:10.324  [2024-12-15 10:57:59.089663] C1.Malloc: Successfully added
00:20:10.324  C1
00:20:10.324   10:57:59	-- management/multicore.sh@69 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Cache Core
00:20:10.584  [2024-12-15 10:57:59.348192] vbdev_ocf.c:1085:start_cache: *NOTICE*: OCF bdev C2 connects to existing cache device Cache
00:20:10.584  [2024-12-15 10:57:59.348445] C1: Inserting core Core
00:20:10.584  [2024-12-15 10:57:59.348469] C1.Core: Sequential cutoff init
00:20:10.584  [2024-12-15 10:57:59.381947] C1.Core: Successfully added
00:20:10.584  C2
00:20:10.584   10:57:59	-- management/multicore.sh@71 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs Cache
00:20:10.584   10:57:59	-- management/multicore.sh@72 -- # jq 'length == 2'
00:20:10.844  true
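Naming a device in bdev_ocf_get_bdevs scopes the listing to OCF bdevs built on it, so `length == 2` pins down that exactly two (C1 and C2) sit over 'Cache':

    $rpc_py bdev_ocf_get_bdevs Cache | jq -e 'length == 2'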
00:20:10.844   10:57:59	-- management/multicore.sh@74 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Cache
00:20:11.103  [2024-12-15 10:57:59.876897] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Cache' was removed
00:20:11.103  [2024-12-15 10:57:59.876941] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Cache' was removed
00:20:11.103  [2024-12-15 10:57:59.877169] C1: Flushing cache
00:20:11.103  [2024-12-15 10:57:59.877186] C1: Flushing cache completed
00:20:11.103  [2024-12-15 10:57:59.877473] C1: Flushing cache
00:20:11.103  [2024-12-15 10:57:59.877483] C1: Flushing cache completed
00:20:11.103  [2024-12-15 10:57:59.877577] C1: Stopping cache
00:20:11.103  [2024-12-15 10:57:59.985099] C1: Done saving cache state!
00:20:11.103  [2024-12-15 10:58:00.002714] Cache C1 successfully stopped
00:20:11.103   10:58:00	-- management/multicore.sh@76 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:11.103   10:58:00	-- management/multicore.sh@76 -- # jq -e '. == []'
00:20:11.362  true
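Here the hotremove cascades: a single bdev_malloc_delete of the shared cache device tears down both C1 and C2 (hence the two flush cycles above), after which the registry check requires an empty list:

    $rpc_py bdev_malloc_delete Cache             # hotremove for C1 and C2 at once
    $rpc_py bdev_ocf_get_bdevs | jq -e '. == []' # no OCF bdevs remain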
00:20:11.362   10:58:00	-- management/multicore.sh@81 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C1 wt Malloc NonExisting
00:20:11.622  [2024-12-15 10:58:00.544637] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C1' is waiting for core device 'NonExisting' to connect
00:20:11.622  C1
00:20:11.622   10:58:00	-- management/multicore.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C2 wt Malloc NonExisting
00:20:11.883  [2024-12-15 10:58:00.785323] vbdev_ocf.c:1501:vbdev_ocf_construct: *NOTICE*: OCF bdev 'C2' is waiting for core device 'NonExisting' to connect
00:20:11.883  C2
00:20:11.883   10:58:00	-- management/multicore.sh@83 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create C3 wt Malloc Core
00:20:12.143  [2024-12-15 10:58:01.038053] Inserting cache C3
00:20:12.143  [2024-12-15 10:58:01.038515] C3: Metadata initialized
00:20:12.143  [2024-12-15 10:58:01.038955] C3: Successfully added
00:20:12.143  [2024-12-15 10:58:01.038963] C3: Cache mode : wt
00:20:12.143  [2024-12-15 10:58:01.049777] C3: Super block config offset : 0 kiB
00:20:12.143  [2024-12-15 10:58:01.049814] C3: Super block config size : 2200 B
00:20:12.143  [2024-12-15 10:58:01.049822] C3: Super block runtime offset : 128 kiB
00:20:12.143  [2024-12-15 10:58:01.049828] C3: Super block runtime size : 4 B
00:20:12.143  [2024-12-15 10:58:01.049835] C3: Reserved offset : 256 kiB
00:20:12.143  [2024-12-15 10:58:01.049842] C3: Reserved size : 128 kiB
00:20:12.143  [2024-12-15 10:58:01.049848] C3: Part config offset : 384 kiB
00:20:12.143  [2024-12-15 10:58:01.049855] C3: Part config size : 48 kiB
00:20:12.143  [2024-12-15 10:58:01.049861] C3: Part runtime offset : 640 kiB
00:20:12.143  [2024-12-15 10:58:01.049867] C3: Part runtime size : 72 kiB
00:20:12.143  [2024-12-15 10:58:01.049874] C3: Core config offset : 768 kiB
00:20:12.143  [2024-12-15 10:58:01.049880] C3: Core config size : 512 kiB
00:20:12.143  [2024-12-15 10:58:01.049886] C3: Core runtime offset : 1792 kiB
00:20:12.143  [2024-12-15 10:58:01.049893] C3: Core runtime size : 1172 kiB
00:20:12.143  [2024-12-15 10:58:01.049899] C3: Core UUID offset : 3072 kiB
00:20:12.143  [2024-12-15 10:58:01.049906] C3: Core UUID size : 16384 kiB
00:20:12.143  [2024-12-15 10:58:01.049912] C3: Cleaning offset : 35840 kiB
00:20:12.143  [2024-12-15 10:58:01.049918] C3: Cleaning size : 196 kiB
00:20:12.143  [2024-12-15 10:58:01.049925] C3: LRU list offset : 36096 kiB
00:20:12.143  [2024-12-15 10:58:01.049931] C3: LRU list size : 148 kiB
00:20:12.143  [2024-12-15 10:58:01.049937] C3: Collision offset : 36352 kiB
00:20:12.143  [2024-12-15 10:58:01.049944] C3: Collision size : 196 kiB
00:20:12.143  [2024-12-15 10:58:01.049950] C3: List info offset : 36608 kiB
00:20:12.143  [2024-12-15 10:58:01.049956] C3: List info size : 148 kiB
00:20:12.143  [2024-12-15 10:58:01.049963] C3: Hash offset : 36864 kiB
00:20:12.143  [2024-12-15 10:58:01.049969] C3: Hash size : 20 kiB
00:20:12.143  [2024-12-15 10:58:01.049976] C3: Cache line size: 4 kiB
00:20:12.143  [2024-12-15 10:58:01.049984] C3: Metadata capacity: 18 MiB
00:20:12.143  [2024-12-15 10:58:01.060271] C3: Policy 'always' initialized successfully
00:20:12.403  [2024-12-15 10:58:01.173939] C3: Done saving cache state!
00:20:12.403  [2024-12-15 10:58:01.206044] C3: Cache attached
00:20:12.403  [2024-12-15 10:58:01.206139] C3: Successfully attached
00:20:12.403  [2024-12-15 10:58:01.206428] C3: Inserting core Core
00:20:12.403  [2024-12-15 10:58:01.206452] C3.Core: Sequential cutoff init
00:20:12.403  [2024-12-15 10:58:01.238002] C3.Core: Successfully added
00:20:12.403  C3
00:20:12.403   10:58:01	-- management/multicore.sh@85 -- # stop_spdk
00:20:12.403   10:58:01	-- management/multicore.sh@20 -- # killprocess 2207726
00:20:12.403   10:58:01	-- common/autotest_common.sh@936 -- # '[' -z 2207726 ']'
00:20:12.403   10:58:01	-- common/autotest_common.sh@940 -- # kill -0 2207726
00:20:12.403    10:58:01	-- common/autotest_common.sh@941 -- # uname
00:20:12.403   10:58:01	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:12.403    10:58:01	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2207726
00:20:12.403   10:58:01	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:12.403   10:58:01	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:12.403   10:58:01	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2207726'
00:20:12.403  killing process with pid 2207726
00:20:12.403   10:58:01	-- common/autotest_common.sh@955 -- # kill 2207726
00:20:12.403   10:58:01	-- common/autotest_common.sh@960 -- # wait 2207726
00:20:12.664  [2024-12-15 10:58:01.487734] C3: Flushing cache
00:20:12.664  [2024-12-15 10:58:01.487780] C3: Flushing cache completed
00:20:12.664  [2024-12-15 10:58:01.487838] C3: Stopping cache
00:20:12.664  [2024-12-15 10:58:01.595506] C3: Done saving cache state!
00:20:12.664  [2024-12-15 10:58:01.614845] Cache C3 successfully stopped
00:20:12.664  [2024-12-15 10:58:01.616466] bdev.c:2354:bdev_finish_unregister_bdevs_iter: *WARNING*: Unregistering claimed bdev 'Malloc'!
00:20:12.664  [2024-12-15 10:58:01.616519] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C1' because its cache device 'Malloc' was removed
00:20:12.664  [2024-12-15 10:58:01.616537] vbdev_ocf.c:1372:hotremove_cb: *NOTICE*: Deinitializing 'C2' because its cache device 'Malloc' was removed
00:20:13.232   10:58:02	-- management/multicore.sh@21 -- # trap - SIGINT SIGTERM EXIT
00:20:13.232  
00:20:13.232  real	0m10.191s
00:20:13.232  user	0m14.975s
00:20:13.232  sys	0m2.087s
00:20:13.232   10:58:02	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:13.232   10:58:02	-- common/autotest_common.sh@10 -- # set +x
00:20:13.232  ************************************
00:20:13.232  END TEST ocf_multicore
00:20:13.232  ************************************
00:20:13.232   10:58:02	-- ocf/ocf.sh@17 -- # run_test ocf_remove /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh
00:20:13.232   10:58:02	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:13.232   10:58:02	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:13.232   10:58:02	-- common/autotest_common.sh@10 -- # set +x
00:20:13.232  ************************************
00:20:13.232  START TEST ocf_remove
00:20:13.232  ************************************
00:20:13.232   10:58:02	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/remove.sh
00:20:13.232    10:58:02	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:13.232     10:58:02	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:13.232     10:58:02	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:13.232    10:58:02	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:13.232    10:58:02	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:13.233    10:58:02	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:13.233    10:58:02	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:13.233    10:58:02	-- scripts/common.sh@335 -- # IFS=.-:
00:20:13.233    10:58:02	-- scripts/common.sh@335 -- # read -ra ver1
00:20:13.233    10:58:02	-- scripts/common.sh@336 -- # IFS=.-:
00:20:13.233    10:58:02	-- scripts/common.sh@336 -- # read -ra ver2
00:20:13.233    10:58:02	-- scripts/common.sh@337 -- # local 'op=<'
00:20:13.233    10:58:02	-- scripts/common.sh@339 -- # ver1_l=2
00:20:13.233    10:58:02	-- scripts/common.sh@340 -- # ver2_l=1
00:20:13.233    10:58:02	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:13.233    10:58:02	-- scripts/common.sh@343 -- # case "$op" in
00:20:13.233    10:58:02	-- scripts/common.sh@344 -- # : 1
00:20:13.233    10:58:02	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:13.233    10:58:02	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:13.233     10:58:02	-- scripts/common.sh@364 -- # decimal 1
00:20:13.233     10:58:02	-- scripts/common.sh@352 -- # local d=1
00:20:13.233     10:58:02	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:13.233     10:58:02	-- scripts/common.sh@354 -- # echo 1
00:20:13.233    10:58:02	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:13.233     10:58:02	-- scripts/common.sh@365 -- # decimal 2
00:20:13.233     10:58:02	-- scripts/common.sh@352 -- # local d=2
00:20:13.233     10:58:02	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:13.233     10:58:02	-- scripts/common.sh@354 -- # echo 2
00:20:13.233    10:58:02	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:13.233    10:58:02	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:13.233    10:58:02	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:13.233    10:58:02	-- scripts/common.sh@367 -- # return 0
00:20:13.233    10:58:02	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:13.233    10:58:02	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:13.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:13.233  		--rc genhtml_branch_coverage=1
00:20:13.233  		--rc genhtml_function_coverage=1
00:20:13.233  		--rc genhtml_legend=1
00:20:13.233  		--rc geninfo_all_blocks=1
00:20:13.233  		--rc geninfo_unexecuted_blocks=1
00:20:13.233  		
00:20:13.233  		'
00:20:13.233    10:58:02	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:13.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:13.233  		--rc genhtml_branch_coverage=1
00:20:13.233  		--rc genhtml_function_coverage=1
00:20:13.233  		--rc genhtml_legend=1
00:20:13.233  		--rc geninfo_all_blocks=1
00:20:13.233  		--rc geninfo_unexecuted_blocks=1
00:20:13.233  		
00:20:13.233  		'
00:20:13.233    10:58:02	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:13.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:13.233  		--rc genhtml_branch_coverage=1
00:20:13.233  		--rc genhtml_function_coverage=1
00:20:13.233  		--rc genhtml_legend=1
00:20:13.233  		--rc geninfo_all_blocks=1
00:20:13.233  		--rc geninfo_unexecuted_blocks=1
00:20:13.233  		
00:20:13.233  		'
00:20:13.233    10:58:02	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:13.233  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:13.233  		--rc genhtml_branch_coverage=1
00:20:13.233  		--rc genhtml_function_coverage=1
00:20:13.233  		--rc genhtml_legend=1
00:20:13.233  		--rc geninfo_all_blocks=1
00:20:13.233  		--rc geninfo_unexecuted_blocks=1
00:20:13.233  		
00:20:13.233  		'
00:20:13.233   10:58:02	-- management/remove.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:20:13.233   10:58:02	-- management/remove.sh@12 -- # rm -f
00:20:13.233   10:58:02	-- management/remove.sh@13 -- # truncate -s 128M aio0
00:20:13.233   10:58:02	-- management/remove.sh@14 -- # truncate -s 128M aio1
00:20:13.233   10:58:02	-- management/remove.sh@16 -- # jq .
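remove.sh departs from the earlier tests in two ways visible here: its cache/core devices are file-backed AIO bdevs, and the target boots from a pre-built JSON config rather than being configured over RPC after startup. The file prep is just sparse 128M files (a sketch; how the config is produced is not visible in this log):

    rm -f aio0 aio1            # remove.sh@12
    truncate -s 128M aio0      # sparse files backing the aio bdevs
    truncate -s 128M aio1
    # remove.sh@16 runs its bdev config through 'jq .' (exact plumbing not
    # shown here), and @47 boots iscsi_tgt with --json <that config>.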
00:20:13.493   10:58:02	-- management/remove.sh@48 -- # spdk_pid=2208472
00:20:13.493   10:58:02	-- management/remove.sh@50 -- # waitforlisten 2208472
00:20:13.493   10:58:02	-- management/remove.sh@47 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config
00:20:13.493   10:58:02	-- common/autotest_common.sh@829 -- # '[' -z 2208472 ']'
00:20:13.493   10:58:02	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:13.493   10:58:02	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:13.493   10:58:02	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:13.493  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:13.493   10:58:02	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:13.493   10:58:02	-- common/autotest_common.sh@10 -- # set +x
00:20:13.493  [2024-12-15 10:58:02.303810] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:13.493  [2024-12-15 10:58:02.303887] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208472 ]
00:20:13.493  EAL: No free 2048 kB hugepages reported on node 1
00:20:13.493  [2024-12-15 10:58:02.409012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:13.493  [2024-12-15 10:58:02.503022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:13.752  [2024-12-15 10:58:02.694628] 'OCF_Core' volume operations registered
00:20:13.752  [2024-12-15 10:58:02.698107] 'OCF_Cache' volume operations registered
00:20:13.752  [2024-12-15 10:58:02.702056] 'OCF Composite' volume operations registered
00:20:13.752  [2024-12-15 10:58:02.705546] 'SPDK_block_device' volume operations registered
00:20:14.322   10:58:03	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:14.322   10:58:03	-- common/autotest_common.sh@862 -- # return 0
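[editor note] The waitforlisten block above (@829-@862) gates the test on the target actually serving RPCs. A simplified sketch of the pattern, assuming the helper and variable names visible in the trace; the real autotest_common.sh implementation differs in detail:
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            # the target is ready once any RPC succeeds on the socket
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            kill -0 "$pid" 2> /dev/null || return 1   # target died while waiting
            sleep 0.5
        done
        return 1   # retries exhausted
    }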
00:20:14.322   10:58:03	-- management/remove.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create ocfWT wt aio0 aio1
00:20:14.581  [2024-12-15 10:58:03.499299] vbdev_ocf.c:1497:vbdev_ocf_construct: *NOTICE*: OCF bdev 'ocfWT' is waiting for cache device 'aio0' to connect
00:20:14.581  ocfWT
00:20:14.581   10:58:03	-- management/remove.sh@58 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:14.581   10:58:03	-- management/remove.sh@58 -- # jq -r '.[] .name'
00:20:14.581   10:58:03	-- management/remove.sh@58 -- # grep -qw ocfWT
00:20:14.841   10:58:03	-- management/remove.sh@62 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete ocfWT
00:20:15.100    10:58:04	-- management/remove.sh@66 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:15.100    10:58:04	-- management/remove.sh@66 -- # jq -r '.[] | select(.name == "ocfWT") | .name'
00:20:15.358   10:58:04	-- management/remove.sh@66 -- # [[ -z '' ]]
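[editor note] remove.sh@54-@66 above is a create/verify/delete/verify round trip: bdev_ocf_create registers ocfWT in write-through mode on the two AIO files (the NOTICE shows it parked waiting for cache device aio0 before attaching), @58 greps the name out of bdev_ocf_get_bdevs, @62 deletes it, and @66 requires the same query to come back empty. Condensed from the traced RPCs:
    $rpc_py bdev_ocf_create ocfWT wt aio0 aio1
    $rpc_py bdev_ocf_get_bdevs | jq -r '.[] .name' | grep -qw ocfWT    # present
    $rpc_py bdev_ocf_delete ocfWT
    [[ -z $($rpc_py bdev_ocf_get_bdevs \
            | jq -r '.[] | select(.name == "ocfWT") | .name') ]]        # gone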
00:20:15.358   10:58:04	-- management/remove.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:20:15.358   10:58:04	-- management/remove.sh@70 -- # killprocess 2208472
00:20:15.358   10:58:04	-- common/autotest_common.sh@936 -- # '[' -z 2208472 ']'
00:20:15.358   10:58:04	-- common/autotest_common.sh@940 -- # kill -0 2208472
00:20:15.358    10:58:04	-- common/autotest_common.sh@941 -- # uname
00:20:15.359   10:58:04	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:15.359    10:58:04	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2208472
00:20:15.359   10:58:04	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:15.359   10:58:04	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:15.359   10:58:04	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2208472'
00:20:15.359  killing process with pid 2208472
00:20:15.359   10:58:04	-- common/autotest_common.sh@955 -- # kill 2208472
00:20:15.359   10:58:04	-- common/autotest_common.sh@960 -- # wait 2208472
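[editor note] killprocess (@936-@960) tears the target down defensively: kill -0 checks the pid is still alive, on Linux the process comm is read to make sure it is not a bare sudo wrapper (here it is reactor_0, SPDK's reactor thread name), and only then is the pid killed and waited on. A simplified sketch of the traced path, not the exact helper:
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 1          # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name == sudo ]] && return 1              # never kill sudo itself
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2> /dev/null                         # reap if it is our child
    }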
00:20:15.928   10:58:04	-- management/remove.sh@74 -- # spdk_pid=2208839
00:20:15.928   10:58:04	-- management/remove.sh@76 -- # trap 'killprocess $spdk_pid; rm -f aio* $curdir/config ocf_bdevs ocf_bdevs_verify; exit 1' SIGINT SIGTERM EXIT
00:20:15.928   10:58:04	-- management/remove.sh@73 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt --json /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config
00:20:15.928   10:58:04	-- management/remove.sh@78 -- # waitforlisten 2208839
00:20:15.928   10:58:04	-- common/autotest_common.sh@829 -- # '[' -z 2208839 ']'
00:20:15.928   10:58:04	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:15.928   10:58:04	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:15.928   10:58:04	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:15.928  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:15.928   10:58:04	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:15.928   10:58:04	-- common/autotest_common.sh@10 -- # set +x
00:20:16.190  [2024-12-15 10:58:04.980787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:16.190  [2024-12-15 10:58:04.980867] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2208839 ]
00:20:16.190  EAL: No free 2048 kB hugepages reported on node 1
00:20:16.190  [2024-12-15 10:58:05.085521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:16.190  [2024-12-15 10:58:05.183106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:16.449  [2024-12-15 10:58:05.364274] 'OCF_Core' volume operations registered
00:20:16.449  [2024-12-15 10:58:05.367468] 'OCF_Cache' volume operations registered
00:20:16.449  [2024-12-15 10:58:05.371082] 'OCF Composite' volume operations registered
00:20:16.449  [2024-12-15 10:58:05.374286] 'SPDK_block_device' volume operations registered
00:20:17.018   10:58:05	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:17.018   10:58:05	-- common/autotest_common.sh@862 -- # return 0
00:20:17.018    10:58:05	-- management/remove.sh@82 -- # jq -r '.[] | select(name == "ocfWT") | .name'
00:20:17.018    10:58:05	-- management/remove.sh@82 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:17.018  jq: error: name/0 is not defined at <top-level>, line 1:
00:20:17.018  .[] | select(name == "ocfWT") | .name             
00:20:17.018  jq: 1 compile error
00:20:17.277  Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
00:20:17.277  BrokenPipeError: [Errno 32] Broken pipe
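[editor note] The failure above is a shell-script bug, not an OCF one. remove.sh@82 uses select(name == "ocfWT"): without the leading dot, jq parses name as a call to an undefined zero-arity function, hence "name/0 is not defined" and the compile error; rpc.py then hits a BrokenPipeError because jq exits before reading its stdout. The earlier probe at remove.sh@66 uses the correct field accessor, so the line presumably intended:
    # broken, as traced:  jq -r '.[] | select(name == "ocfWT") | .name'
    # correct accessor:
    $rpc_py bdev_ocf_get_bdevs | jq -r '.[] | select(.name == "ocfWT") | .name'
Because the broken pipeline produced no output, the [[ -z '' ]] test at @82 below still passes and the run continues.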
00:20:17.277     10:58:06	-- management/remove.sh@82 -- # trap - ERR
00:20:17.277     10:58:06	-- management/remove.sh@82 -- # print_backtrace
00:20:17.277     10:58:06	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:20:17.277     10:58:06	-- common/autotest_common.sh@1142 -- # return 0
00:20:17.277   10:58:06	-- management/remove.sh@82 -- # [[ -z '' ]]
00:20:17.277   10:58:06	-- management/remove.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:20:17.277   10:58:06	-- management/remove.sh@86 -- # killprocess 2208839
00:20:17.277   10:58:06	-- common/autotest_common.sh@936 -- # '[' -z 2208839 ']'
00:20:17.277   10:58:06	-- common/autotest_common.sh@940 -- # kill -0 2208839
00:20:17.277    10:58:06	-- common/autotest_common.sh@941 -- # uname
00:20:17.277   10:58:06	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:17.277    10:58:06	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2208839
00:20:17.277   10:58:06	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:17.277   10:58:06	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:17.277   10:58:06	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2208839'
00:20:17.277  killing process with pid 2208839
00:20:17.277   10:58:06	-- common/autotest_common.sh@955 -- # kill 2208839
00:20:17.277   10:58:06	-- common/autotest_common.sh@960 -- # wait 2208839
00:20:17.846   10:58:06	-- management/remove.sh@87 -- # rm -f aio0 aio1 /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/config ocf_bdevs ocf_bdevs_verify
00:20:17.846  
00:20:17.846  real	0m4.765s
00:20:17.846  user	0m5.802s
00:20:17.846  sys	0m1.291s
00:20:17.846   10:58:06	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:17.846   10:58:06	-- common/autotest_common.sh@10 -- # set +x
00:20:17.846  ************************************
00:20:17.846  END TEST ocf_remove
00:20:17.846  ************************************
00:20:18.106   10:58:06	-- ocf/ocf.sh@18 -- # run_test ocf_configuration_change /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh
00:20:18.106   10:58:06	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:18.106   10:58:06	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:18.106   10:58:06	-- common/autotest_common.sh@10 -- # set +x
00:20:18.106  ************************************
00:20:18.106  START TEST ocf_configuration_change
00:20:18.106  ************************************
00:20:18.106   10:58:06	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/ocf/management/configuration-change.sh
00:20:18.106    10:58:06	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:18.106     10:58:06	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:18.106     10:58:06	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:18.106    10:58:07	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:18.106    10:58:07	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:18.106    10:58:07	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:18.106    10:58:07	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:18.106    10:58:07	-- scripts/common.sh@335 -- # IFS=.-:
00:20:18.106    10:58:07	-- scripts/common.sh@335 -- # read -ra ver1
00:20:18.106    10:58:07	-- scripts/common.sh@336 -- # IFS=.-:
00:20:18.106    10:58:07	-- scripts/common.sh@336 -- # read -ra ver2
00:20:18.106    10:58:07	-- scripts/common.sh@337 -- # local 'op=<'
00:20:18.106    10:58:07	-- scripts/common.sh@339 -- # ver1_l=2
00:20:18.106    10:58:07	-- scripts/common.sh@340 -- # ver2_l=1
00:20:18.106    10:58:07	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:18.106    10:58:07	-- scripts/common.sh@343 -- # case "$op" in
00:20:18.106    10:58:07	-- scripts/common.sh@344 -- # : 1
00:20:18.106    10:58:07	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:18.106    10:58:07	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:18.106     10:58:07	-- scripts/common.sh@364 -- # decimal 1
00:20:18.106     10:58:07	-- scripts/common.sh@352 -- # local d=1
00:20:18.106     10:58:07	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:18.106     10:58:07	-- scripts/common.sh@354 -- # echo 1
00:20:18.106    10:58:07	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:18.106     10:58:07	-- scripts/common.sh@365 -- # decimal 2
00:20:18.106     10:58:07	-- scripts/common.sh@352 -- # local d=2
00:20:18.106     10:58:07	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:18.106     10:58:07	-- scripts/common.sh@354 -- # echo 2
00:20:18.106    10:58:07	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:18.106    10:58:07	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:18.106    10:58:07	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:18.106    10:58:07	-- scripts/common.sh@367 -- # return 0
00:20:18.106    10:58:07	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:18.106    10:58:07	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:18.106  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.106  		--rc genhtml_branch_coverage=1
00:20:18.106  		--rc genhtml_function_coverage=1
00:20:18.106  		--rc genhtml_legend=1
00:20:18.106  		--rc geninfo_all_blocks=1
00:20:18.106  		--rc geninfo_unexecuted_blocks=1
00:20:18.106  		
00:20:18.106  		'
00:20:18.106    10:58:07	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:18.106  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.106  		--rc genhtml_branch_coverage=1
00:20:18.106  		--rc genhtml_function_coverage=1
00:20:18.106  		--rc genhtml_legend=1
00:20:18.106  		--rc geninfo_all_blocks=1
00:20:18.107  		--rc geninfo_unexecuted_blocks=1
00:20:18.107  		
00:20:18.107  		'
00:20:18.107    10:58:07	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:18.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.107  		--rc genhtml_branch_coverage=1
00:20:18.107  		--rc genhtml_function_coverage=1
00:20:18.107  		--rc genhtml_legend=1
00:20:18.107  		--rc geninfo_all_blocks=1
00:20:18.107  		--rc geninfo_unexecuted_blocks=1
00:20:18.107  		
00:20:18.107  		'
00:20:18.107    10:58:07	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:18.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:18.107  		--rc genhtml_branch_coverage=1
00:20:18.107  		--rc genhtml_function_coverage=1
00:20:18.107  		--rc genhtml_legend=1
00:20:18.107  		--rc geninfo_all_blocks=1
00:20:18.107  		--rc geninfo_unexecuted_blocks=1
00:20:18.107  		
00:20:18.107  		'
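[editor note] The scripts/common.sh trace above (@332-@367) is lt 1.15 2 deciding whether the detected lcov is older than 2.x: both version strings are split on .-: into arrays and compared component by component, returning success as soon as a left component is smaller (1 < 2 here, so the plain --rc lcov_branch_coverage/lcov_function_coverage options are kept). A hedged reconstruction of only the '<' branch that was traced:
    cmp_versions() {
        local op=$2
        [[ $op == '<' ]] || return 2         # only the traced branch is sketched
        local IFS=.-:                        # split on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left side newer
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left side older
        done
        return 1                             # equal is not strictly '<'
    }
    cmp_versions 1.15 '<' 2 && echo "lcov predates 2.x"   # matches the trace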
00:20:18.107   10:58:07	-- management/configuration-change.sh@10 -- # rpc_py=/var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py
00:20:18.107   10:58:07	-- management/configuration-change.sh@11 -- # cache_line_sizes=(4 8 16 32 64)
00:20:18.107   10:58:07	-- management/configuration-change.sh@12 -- # cache_modes=(wt wb pt wa wi wo)
00:20:18.107   10:58:07	-- management/configuration-change.sh@15 -- # spdk_pid=2209137
00:20:18.107   10:58:07	-- management/configuration-change.sh@14 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/iscsi_tgt
00:20:18.107   10:58:07	-- management/configuration-change.sh@17 -- # waitforlisten 2209137
00:20:18.107   10:58:07	-- common/autotest_common.sh@829 -- # '[' -z 2209137 ']'
00:20:18.107   10:58:07	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:18.107   10:58:07	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:18.107   10:58:07	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:18.107  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:18.107   10:58:07	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:18.107   10:58:07	-- common/autotest_common.sh@10 -- # set +x
00:20:18.107  [2024-12-15 10:58:07.109957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:18.107  [2024-12-15 10:58:07.110033] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2209137 ]
00:20:18.366  EAL: No free 2048 kB hugepages reported on node 1
00:20:18.366  [2024-12-15 10:58:07.214928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:18.366  [2024-12-15 10:58:07.321604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:20:18.626  [2024-12-15 10:58:07.521638] 'OCF_Core' volume operations registered
00:20:18.626  [2024-12-15 10:58:07.525126] 'OCF_Cache' volume operations registered
00:20:18.626  [2024-12-15 10:58:07.529100] 'OCF Composite' volume operations registered
00:20:18.626  [2024-12-15 10:58:07.532611] 'SPDK_block_device' volume operations registered
00:20:19.194   10:58:08	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:19.194   10:58:08	-- common/autotest_common.sh@862 -- # return 0
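[editor note] configuration-change.sh@11-@12 defines the two sweeps the rest of this log executes: cache_line_sizes=(4 8 16 32 64) and cache_modes=(wt wb pt wa wi wo). Each line-size iteration below follows the same shape; condensed from the traced RPCs:
    for cache_line_size in "${cache_line_sizes[@]}"; do
        $rpc_py bdev_malloc_create 101 512 -b Malloc0       # 101 MiB, 512 B blocks
        $rpc_py bdev_malloc_create 101 512 -b Malloc1
        $rpc_py bdev_ocf_create Cache0 wt Malloc0 Malloc1 \
                --cache-line-size "$cache_line_size"
        # jq -e gates on started/attached state and on the persisted line size
        $rpc_py bdev_ocf_delete Cache0
        $rpc_py bdev_malloc_delete Malloc0
        $rpc_py bdev_malloc_delete Malloc1
    done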
00:20:19.194   10:58:08	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:20:19.194   10:58:08	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:20:19.453  Malloc0
00:20:19.453   10:58:08	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:20:19.713  Malloc1
00:20:19.713   10:58:08	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 4
00:20:19.972  [2024-12-15 10:58:08.858824] Inserting cache Cache0
00:20:19.972  [2024-12-15 10:58:08.859195] Cache0: Metadata initialized
00:20:19.972  [2024-12-15 10:58:08.859645] Cache0: Successfully added
00:20:19.972  [2024-12-15 10:58:08.859660] Cache0: Cache mode : wt
00:20:19.972  [2024-12-15 10:58:08.869564] Cache0: Super block config offset : 0 kiB
00:20:19.972  [2024-12-15 10:58:08.869584] Cache0: Super block config size : 2200 B
00:20:19.972  [2024-12-15 10:58:08.869591] Cache0: Super block runtime offset : 128 kiB
00:20:19.972  [2024-12-15 10:58:08.869597] Cache0: Super block runtime size : 4 B
00:20:19.972  [2024-12-15 10:58:08.869604] Cache0: Reserved offset : 256 kiB
00:20:19.972  [2024-12-15 10:58:08.869611] Cache0: Reserved size : 128 kiB
00:20:19.972  [2024-12-15 10:58:08.869628] Cache0: Part config offset : 384 kiB
00:20:19.972  [2024-12-15 10:58:08.869635] Cache0: Part config size : 48 kiB
00:20:19.972  [2024-12-15 10:58:08.869641] Cache0: Part runtime offset : 640 kiB
00:20:19.972  [2024-12-15 10:58:08.869648] Cache0: Part runtime size : 72 kiB
00:20:19.972  [2024-12-15 10:58:08.869654] Cache0: Core config offset : 768 kiB
00:20:19.972  [2024-12-15 10:58:08.869660] Cache0: Core config size : 512 kiB
00:20:19.972  [2024-12-15 10:58:08.869667] Cache0: Core runtime offset : 1792 kiB
00:20:19.972  [2024-12-15 10:58:08.869673] Cache0: Core runtime size : 1172 kiB
00:20:19.972  [2024-12-15 10:58:08.869679] Cache0: Core UUID offset : 3072 kiB
00:20:19.972  [2024-12-15 10:58:08.869686] Cache0: Core UUID size : 16384 kiB
00:20:19.972  [2024-12-15 10:58:08.869692] Cache0: Cleaning offset : 35840 kiB
00:20:19.972  [2024-12-15 10:58:08.869698] Cache0: Cleaning size : 196 kiB
00:20:19.972  [2024-12-15 10:58:08.869705] Cache0: LRU list offset : 36096 kiB
00:20:19.972  [2024-12-15 10:58:08.869711] Cache0: LRU list size : 148 kiB
00:20:19.972  [2024-12-15 10:58:08.869717] Cache0: Collision offset : 36352 kiB
00:20:19.972  [2024-12-15 10:58:08.869723] Cache0: Collision size : 196 kiB
00:20:19.972  [2024-12-15 10:58:08.869730] Cache0: List info offset : 36608 kiB
00:20:19.972  [2024-12-15 10:58:08.869736] Cache0: List info size : 148 kiB
00:20:19.972  [2024-12-15 10:58:08.869742] Cache0: Hash offset : 36864 kiB
00:20:19.972  [2024-12-15 10:58:08.869749] Cache0: Hash size : 20 kiB
00:20:19.972  [2024-12-15 10:58:08.869756] Cache0: Cache line size: 4 kiB
00:20:19.972  [2024-12-15 10:58:08.869764] Cache0: Metadata capacity: 18 MiB
00:20:19.973  [2024-12-15 10:58:08.879355] Cache0: Policy 'always' initialized successfully
00:20:20.231  [2024-12-15 10:58:08.993322] Cache0: Done saving cache state!
00:20:20.231  [2024-12-15 10:58:09.025605] Cache0: Cache attached
00:20:20.231  [2024-12-15 10:58:09.025702] Cache0: Successfully attached
00:20:20.231  [2024-12-15 10:58:09.025985] Cache0: Inserting core Malloc1
00:20:20.231  [2024-12-15 10:58:09.026007] Cache0.Malloc1: Sequential cutoff init
00:20:20.231  [2024-12-15 10:58:09.057941] Cache0.Malloc1: Successfully added
00:20:20.231  Cache0
00:20:20.231   10:58:09	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:20.231   10:58:09	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:20:20.491  true
00:20:20.491   10:58:09	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:20.491   10:58:09	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 4'
00:20:20.749  true
00:20:20.749   10:58:09	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:20.749   10:58:09	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 4'
00:20:21.009  true
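[editor note] Each iteration passes through three jq -e gates: @25 confirms the vbdev reports started with cache and core attached, @29 reads the live cache_line_size from bdev_get_bdevs, and @31 re-reads it from the persisted config. With -e, jq's exit status mirrors its last output (0 only for a value that is neither false nor null), so the "true" lines above are also the pass/fail signals. The @31 check, for instance, is equivalent to:
    $rpc_py save_subsystem_config -n bdev \
      | jq -e '.config | .[]
               | select(.method == "bdev_ocf_create")
               | .params.cache_line_size == 4'
    # prints "true" and exits 0 only if the saved create call recorded 4 KiB lines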
00:20:21.009   10:58:09	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:20:21.268  [2024-12-15 10:58:10.070999] Cache0: Flushing cache
00:20:21.268  [2024-12-15 10:58:10.071042] Cache0: Flushing cache completed
00:20:21.268  [2024-12-15 10:58:10.072024] Cache0.Malloc1: Removing core
00:20:21.268  [2024-12-15 10:58:10.104642] Cache0: Core Malloc1 successfully removed
00:20:21.268  [2024-12-15 10:58:10.104712] Cache0: Stopping cache
00:20:21.268  [2024-12-15 10:58:10.211999] Cache0: Done saving cache state!
00:20:21.268  [2024-12-15 10:58:10.230696] Cache Cache0 successfully stopped
00:20:21.268   10:58:10	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:21.838   10:58:10	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:20:22.097   10:58:11	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:20:22.097   10:58:11	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:20:22.356  Malloc0
00:20:22.356   10:58:11	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:20:22.925  Malloc1
00:20:22.925   10:58:11	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 8
00:20:23.185  [2024-12-15 10:58:12.131606] Inserting cache Cache0
00:20:23.185  [2024-12-15 10:58:12.132085] Cache0: Metadata initialized
00:20:23.185  [2024-12-15 10:58:12.132525] Cache0: Successfully added
00:20:23.185  [2024-12-15 10:58:12.132533] Cache0: Cache mode : wt
00:20:23.185  [2024-12-15 10:58:12.143289] Cache0: Super block config offset : 0 kiB
00:20:23.185  [2024-12-15 10:58:12.143316] Cache0: Super block config size : 2200 B
00:20:23.185  [2024-12-15 10:58:12.143323] Cache0: Super block runtime offset : 128 kiB
00:20:23.185  [2024-12-15 10:58:12.143330] Cache0: Super block runtime size : 4 B
00:20:23.185  [2024-12-15 10:58:12.143337] Cache0: Reserved offset : 256 kiB
00:20:23.185  [2024-12-15 10:58:12.143343] Cache0: Reserved size : 128 kiB
00:20:23.185  [2024-12-15 10:58:12.143349] Cache0: Part config offset : 384 kiB
00:20:23.185  [2024-12-15 10:58:12.143356] Cache0: Part config size : 48 kiB
00:20:23.185  [2024-12-15 10:58:12.143362] Cache0: Part runtime offset : 640 kiB
00:20:23.185  [2024-12-15 10:58:12.143368] Cache0: Part runtime size : 72 kiB
00:20:23.185  [2024-12-15 10:58:12.143375] Cache0: Core config offset : 768 kiB
00:20:23.185  [2024-12-15 10:58:12.143381] Cache0: Core config size : 512 kiB
00:20:23.185  [2024-12-15 10:58:12.143387] Cache0: Core runtime offset : 1792 kiB
00:20:23.185  [2024-12-15 10:58:12.143394] Cache0: Core runtime size : 1172 kiB
00:20:23.185  [2024-12-15 10:58:12.143400] Cache0: Core UUID offset : 3072 kiB
00:20:23.185  [2024-12-15 10:58:12.143406] Cache0: Core UUID size : 16384 kiB
00:20:23.185  [2024-12-15 10:58:12.143412] Cache0: Cleaning offset : 35840 kiB
00:20:23.185  [2024-12-15 10:58:12.143419] Cache0: Cleaning size : 100 kiB
00:20:23.185  [2024-12-15 10:58:12.143425] Cache0: LRU list offset : 35968 kiB
00:20:23.185  [2024-12-15 10:58:12.143431] Cache0: LRU list size : 76 kiB
00:20:23.185  [2024-12-15 10:58:12.143437] Cache0: Collision offset : 36096 kiB
00:20:23.185  [2024-12-15 10:58:12.143444] Cache0: Collision size : 116 kiB
00:20:23.185  [2024-12-15 10:58:12.143450] Cache0: List info offset : 36224 kiB
00:20:23.185  [2024-12-15 10:58:12.143456] Cache0: List info size : 76 kiB
00:20:23.185  [2024-12-15 10:58:12.143463] Cache0: Hash offset : 36352 kiB
00:20:23.185  [2024-12-15 10:58:12.143476] Cache0: Hash size : 12 kiB
00:20:23.185  [2024-12-15 10:58:12.143483] Cache0: Cache line size: 8 kiB
00:20:23.185  [2024-12-15 10:58:12.143491] Cache0: Metadata capacity: 18 MiB
00:20:23.185  [2024-12-15 10:58:12.153780] Cache0: Policy 'always' initialized successfully
00:20:23.444  [2024-12-15 10:58:12.252186] Cache0: Done saving cache state!
00:20:23.444  [2024-12-15 10:58:12.283644] Cache0: Cache attached
00:20:23.444  [2024-12-15 10:58:12.283739] Cache0: Successfully attached
00:20:23.444  [2024-12-15 10:58:12.284013] Cache0: Inserting core Malloc1
00:20:23.444  [2024-12-15 10:58:12.284034] Cache0.Malloc1: Sequential cutoff init
00:20:23.444  [2024-12-15 10:58:12.315443] Cache0.Malloc1: Successfully added
00:20:23.444  Cache0
00:20:23.444   10:58:12	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:23.444   10:58:12	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:20:23.703  true
00:20:23.703   10:58:12	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:23.703   10:58:12	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 8'
00:20:23.962  true
00:20:23.962   10:58:12	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:23.962   10:58:12	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 8'
00:20:24.220  true
00:20:24.220   10:58:13	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:20:24.789  [2024-12-15 10:58:13.577162] Cache0: Flushing cache
00:20:24.789  [2024-12-15 10:58:13.577199] Cache0: Flushing cache completed
00:20:24.789  [2024-12-15 10:58:13.577856] Cache0.Malloc1: Removing core
00:20:24.789  [2024-12-15 10:58:13.610092] Cache0: Core Malloc1 successfully removed
00:20:24.789  [2024-12-15 10:58:13.610154] Cache0: Stopping cache
00:20:24.789  [2024-12-15 10:58:13.704495] Cache0: Done saving cache state!
00:20:24.789  [2024-12-15 10:58:13.723341] Cache Cache0 successfully stopped
00:20:24.789   10:58:13	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:25.048   10:58:14	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:20:25.306   10:58:14	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:20:25.306   10:58:14	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:20:25.565  Malloc0
00:20:25.565   10:58:14	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:20:26.133  Malloc1
00:20:26.133   10:58:15	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 16
00:20:26.392  [2024-12-15 10:58:15.264706] Inserting cache Cache0
00:20:26.392  [2024-12-15 10:58:15.265124] Cache0: Metadata initialized
00:20:26.392  [2024-12-15 10:58:15.265562] Cache0: Successfully added
00:20:26.392  [2024-12-15 10:58:15.265570] Cache0: Cache mode : wt
00:20:26.392  [2024-12-15 10:58:15.275400] Cache0: Super block config offset : 0 kiB
00:20:26.392  [2024-12-15 10:58:15.275423] Cache0: Super block config size : 2200 B
00:20:26.392  [2024-12-15 10:58:15.275430] Cache0: Super block runtime offset : 128 kiB
00:20:26.392  [2024-12-15 10:58:15.275437] Cache0: Super block runtime size : 4 B
00:20:26.392  [2024-12-15 10:58:15.275444] Cache0: Reserved offset : 256 kiB
00:20:26.392  [2024-12-15 10:58:15.275450] Cache0: Reserved size : 128 kiB
00:20:26.392  [2024-12-15 10:58:15.275456] Cache0: Part config offset : 384 kiB
00:20:26.392  [2024-12-15 10:58:15.275463] Cache0: Part config size : 48 kiB
00:20:26.392  [2024-12-15 10:58:15.275469] Cache0: Part runtime offset : 640 kiB
00:20:26.392  [2024-12-15 10:58:15.275475] Cache0: Part runtime size : 72 kiB
00:20:26.392  [2024-12-15 10:58:15.275482] Cache0: Core config offset : 768 kiB
00:20:26.392  [2024-12-15 10:58:15.275488] Cache0: Core config size : 512 kiB
00:20:26.392  [2024-12-15 10:58:15.275494] Cache0: Core runtime offset : 1792 kiB
00:20:26.392  [2024-12-15 10:58:15.275501] Cache0: Core runtime size : 1172 kiB
00:20:26.392  [2024-12-15 10:58:15.275507] Cache0: Core UUID offset : 3072 kiB
00:20:26.392  [2024-12-15 10:58:15.275520] Cache0: Core UUID size : 16384 kiB
00:20:26.392  [2024-12-15 10:58:15.275526] Cache0: Cleaning offset : 35840 kiB
00:20:26.392  [2024-12-15 10:58:15.275533] Cache0: Cleaning size : 52 kiB
00:20:26.392  [2024-12-15 10:58:15.275539] Cache0: LRU list offset : 35968 kiB
00:20:26.392  [2024-12-15 10:58:15.275545] Cache0: LRU list size : 40 kiB
00:20:26.392  [2024-12-15 10:58:15.275552] Cache0: Collision offset : 36096 kiB
00:20:26.392  [2024-12-15 10:58:15.275558] Cache0: Collision size : 76 kiB
00:20:26.392  [2024-12-15 10:58:15.275564] Cache0: List info offset : 36224 kiB
00:20:26.392  [2024-12-15 10:58:15.275571] Cache0: List info size : 40 kiB
00:20:26.392  [2024-12-15 10:58:15.275577] Cache0: Hash offset : 36352 kiB
00:20:26.392  [2024-12-15 10:58:15.275583] Cache0: Hash size : 8 kiB
00:20:26.392  [2024-12-15 10:58:15.275590] Cache0: Cache line size: 16 kiB
00:20:26.392  [2024-12-15 10:58:15.275598] Cache0: Metadata capacity: 18 MiB
00:20:26.392  [2024-12-15 10:58:15.285051] Cache0: Policy 'always' initialized successfully
00:20:26.392  [2024-12-15 10:58:15.374963] Cache0: Done saving cache state!
00:20:26.392  [2024-12-15 10:58:15.405782] Cache0: Cache attached
00:20:26.392  [2024-12-15 10:58:15.405876] Cache0: Successfully attached
00:20:26.392  [2024-12-15 10:58:15.406156] Cache0: Inserting core Malloc1
00:20:26.392  [2024-12-15 10:58:15.406178] Cache0.Malloc1: Sequential cutoff init
00:20:26.651  [2024-12-15 10:58:15.437349] Cache0.Malloc1: Successfully added
00:20:26.651  Cache0
00:20:26.651   10:58:15	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:26.651   10:58:15	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:20:26.911  true
00:20:26.911   10:58:15	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:26.911   10:58:15	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 16'
00:20:27.171  true
00:20:27.171   10:58:15	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:27.171   10:58:15	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 16'
00:20:27.430  true
00:20:27.430   10:58:16	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:20:27.690  [2024-12-15 10:58:16.446311] Cache0: Flushing cache
00:20:27.690  [2024-12-15 10:58:16.446351] Cache0: Flushing cache completed
00:20:27.690  [2024-12-15 10:58:16.446831] Cache0.Malloc1: Removing core
00:20:27.690  [2024-12-15 10:58:16.480097] Cache0: Core Malloc1 successfully removed
00:20:27.690  [2024-12-15 10:58:16.480158] Cache0: Stopping cache
00:20:27.690  [2024-12-15 10:58:16.567824] Cache0: Done saving cache state!
00:20:27.690  [2024-12-15 10:58:16.584307] Cache Cache0 successfully stopped
00:20:27.690   10:58:16	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:27.950   10:58:16	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:20:28.210   10:58:17	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:20:28.210   10:58:17	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:20:28.469  Malloc0
00:20:28.470   10:58:17	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:20:28.729  Malloc1
00:20:28.729   10:58:17	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 32
00:20:28.989  [2024-12-15 10:58:17.883431] Inserting cache Cache0
00:20:28.989  [2024-12-15 10:58:17.883914] Cache0: Metadata initialized
00:20:28.989  [2024-12-15 10:58:17.884351] Cache0: Successfully added
00:20:28.989  [2024-12-15 10:58:17.884359] Cache0: Cache mode : wt
00:20:28.989  [2024-12-15 10:58:17.895133] Cache0: Super block config offset : 0 kiB
00:20:28.989  [2024-12-15 10:58:17.895172] Cache0: Super block config size : 2200 B
00:20:28.989  [2024-12-15 10:58:17.895179] Cache0: Super block runtime offset : 128 kiB
00:20:28.989  [2024-12-15 10:58:17.895185] Cache0: Super block runtime size : 4 B
00:20:28.989  [2024-12-15 10:58:17.895192] Cache0: Reserved offset : 256 kiB
00:20:28.989  [2024-12-15 10:58:17.895206] Cache0: Reserved size : 128 kiB
00:20:28.989  [2024-12-15 10:58:17.895213] Cache0: Part config offset : 384 kiB
00:20:28.989  [2024-12-15 10:58:17.895219] Cache0: Part config size : 48 kiB
00:20:28.989  [2024-12-15 10:58:17.895225] Cache0: Part runtime offset : 640 kiB
00:20:28.989  [2024-12-15 10:58:17.895231] Cache0: Part runtime size : 72 kiB
00:20:28.989  [2024-12-15 10:58:17.895238] Cache0: Core config offset : 768 kiB
00:20:28.989  [2024-12-15 10:58:17.895244] Cache0: Core config size : 512 kiB
00:20:28.989  [2024-12-15 10:58:17.895250] Cache0: Core runtime offset : 1792 kiB
00:20:28.989  [2024-12-15 10:58:17.895256] Cache0: Core runtime size : 1172 kiB
00:20:28.989  [2024-12-15 10:58:17.895263] Cache0: Core UUID offset : 3072 kiB
00:20:28.989  [2024-12-15 10:58:17.895269] Cache0: Core UUID size : 16384 kiB
00:20:28.989  [2024-12-15 10:58:17.895275] Cache0: Cleaning offset : 35840 kiB
00:20:28.989  [2024-12-15 10:58:17.895281] Cache0: Cleaning size : 28 kiB
00:20:28.989  [2024-12-15 10:58:17.895288] Cache0: LRU list offset : 35968 kiB
00:20:28.989  [2024-12-15 10:58:17.895294] Cache0: LRU list size : 20 kiB
00:20:28.989  [2024-12-15 10:58:17.895300] Cache0: Collision offset : 36096 kiB
00:20:28.989  [2024-12-15 10:58:17.895306] Cache0: Collision size : 56 kiB
00:20:28.989  [2024-12-15 10:58:17.895313] Cache0: List info offset : 36224 kiB
00:20:28.989  [2024-12-15 10:58:17.895319] Cache0: List info size : 20 kiB
00:20:28.989  [2024-12-15 10:58:17.895325] Cache0: Hash offset : 36352 kiB
00:20:28.989  [2024-12-15 10:58:17.895331] Cache0: Hash size : 4 kiB
00:20:28.989  [2024-12-15 10:58:17.895338] Cache0: Cache line size: 32 kiB
00:20:28.989  [2024-12-15 10:58:17.895347] Cache0: Metadata capacity: 18 MiB
00:20:28.989  [2024-12-15 10:58:17.905625] Cache0: Policy 'always' initialized successfully
00:20:28.989  [2024-12-15 10:58:17.992431] Cache0: Done saving cache state!
00:20:29.249  [2024-12-15 10:58:18.024075] Cache0: Cache attached
00:20:29.250  [2024-12-15 10:58:18.024169] Cache0: Successfully attached
00:20:29.250  [2024-12-15 10:58:18.024445] Cache0: Inserting core Malloc1
00:20:29.250  [2024-12-15 10:58:18.024469] Cache0.Malloc1: Sequential cutoff init
00:20:29.250  [2024-12-15 10:58:18.055665] Cache0.Malloc1: Successfully added
00:20:29.250  Cache0
00:20:29.250   10:58:18	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:29.250   10:58:18	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:20:29.509  true
00:20:29.509   10:58:18	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:29.509   10:58:18	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 32'
00:20:29.769  true
00:20:29.769   10:58:18	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:29.769   10:58:18	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 32'
00:20:30.029  true
00:20:30.029   10:58:18	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:20:30.289  [2024-12-15 10:58:19.044627] Cache0: Flushing cache
00:20:30.289  [2024-12-15 10:58:19.044667] Cache0: Flushing cache completed
00:20:30.289  [2024-12-15 10:58:19.045035] Cache0.Malloc1: Removing core
00:20:30.289  [2024-12-15 10:58:19.077145] Cache0: Core Malloc1 successfully removed
00:20:30.289  [2024-12-15 10:58:19.077207] Cache0: Stopping cache
00:20:30.289  [2024-12-15 10:58:19.161242] Cache0: Done saving cache state!
00:20:30.289  [2024-12-15 10:58:19.179529] Cache Cache0 successfully stopped
00:20:30.289   10:58:19	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:30.549   10:58:19	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:20:30.809   10:58:19	-- management/configuration-change.sh@20 -- # for cache_line_size in "${cache_line_sizes[@]}"
00:20:30.809   10:58:19	-- management/configuration-change.sh@21 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:20:31.069  Malloc0
00:20:31.069   10:58:19	-- management/configuration-change.sh@22 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:20:31.328  Malloc1
00:20:31.328   10:58:20	-- management/configuration-change.sh@23 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1 --cache-line-size 64
00:20:31.588  [2024-12-15 10:58:20.480304] Inserting cache Cache0
00:20:31.588  [2024-12-15 10:58:20.480732] Cache0: Metadata initialized
00:20:31.588  [2024-12-15 10:58:20.481171] Cache0: Successfully added
00:20:31.588  [2024-12-15 10:58:20.481179] Cache0: Cache mode : wt
00:20:31.588  [2024-12-15 10:58:20.491025] Cache0: Super block config offset : 0 kiB
00:20:31.588  [2024-12-15 10:58:20.491048] Cache0: Super block config size : 2200 B
00:20:31.588  [2024-12-15 10:58:20.491055] Cache0: Super block runtime offset : 128 kiB
00:20:31.589  [2024-12-15 10:58:20.491062] Cache0: Super block runtime size : 4 B
00:20:31.589  [2024-12-15 10:58:20.491069] Cache0: Reserved offset : 256 kiB
00:20:31.589  [2024-12-15 10:58:20.491075] Cache0: Reserved size : 128 kiB
00:20:31.589  [2024-12-15 10:58:20.491081] Cache0: Part config offset : 384 kiB
00:20:31.589  [2024-12-15 10:58:20.491088] Cache0: Part config size : 48 kiB
00:20:31.589  [2024-12-15 10:58:20.491094] Cache0: Part runtime offset : 640 kiB
00:20:31.589  [2024-12-15 10:58:20.491100] Cache0: Part runtime size : 72 kiB
00:20:31.589  [2024-12-15 10:58:20.491106] Cache0: Core config offset : 768 kiB
00:20:31.589  [2024-12-15 10:58:20.491112] Cache0: Core config size : 512 kiB
00:20:31.589  [2024-12-15 10:58:20.491119] Cache0: Core runtime offset : 1792 kiB
00:20:31.589  [2024-12-15 10:58:20.491125] Cache0: Core runtime size : 1172 kiB
00:20:31.589  [2024-12-15 10:58:20.491131] Cache0: Core UUID offset : 3072 kiB
00:20:31.589  [2024-12-15 10:58:20.491138] Cache0: Core UUID size : 16384 kiB
00:20:31.589  [2024-12-15 10:58:20.491144] Cache0: Cleaning offset : 35840 kiB
00:20:31.589  [2024-12-15 10:58:20.491150] Cache0: Cleaning size : 16 kiB
00:20:31.589  [2024-12-15 10:58:20.491157] Cache0: LRU list offset : 35968 kiB
00:20:31.589  [2024-12-15 10:58:20.491163] Cache0: LRU list size : 12 kiB
00:20:31.589  [2024-12-15 10:58:20.491169] Cache0: Collision offset : 36096 kiB
00:20:31.589  [2024-12-15 10:58:20.491175] Cache0: Collision size : 44 kiB
00:20:31.589  [2024-12-15 10:58:20.491182] Cache0: List info offset : 36224 kiB
00:20:31.589  [2024-12-15 10:58:20.491188] Cache0: List info size : 12 kiB
00:20:31.589  [2024-12-15 10:58:20.491194] Cache0: Hash offset : 36352 kiB
00:20:31.589  [2024-12-15 10:58:20.491201] Cache0: Hash size : 4 kiB
00:20:31.589  [2024-12-15 10:58:20.491207] Cache0: Cache line size: 64 kiB
00:20:31.589  [2024-12-15 10:58:20.491215] Cache0: Metadata capacity: 18 MiB
00:20:31.589  [2024-12-15 10:58:20.500630] Cache0: Policy 'always' initialized successfully
00:20:31.589  [2024-12-15 10:58:20.584778] Cache0: Done saving cache state!
00:20:31.848  [2024-12-15 10:58:20.616006] Cache0: Cache attached
00:20:31.848  [2024-12-15 10:58:20.616103] Cache0: Successfully attached
00:20:31.848  [2024-12-15 10:58:20.616383] Cache0: Inserting core Malloc1
00:20:31.848  [2024-12-15 10:58:20.616404] Cache0.Malloc1: Seqential cutoff init
00:20:31.849  [2024-12-15 10:58:20.647282] Cache0.Malloc1: Successfully added
00:20:31.849  Cache0
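[editor note] Comparing the five Super-block dumps in this test shows the metadata cost of the cache line size directly: the fixed regions (Super block config 2200 B, Reserved 128 kiB, Core config 512 kiB) and the overall 18 MiB metadata capacity never change, while the per-line bookkeeping regions shrink as lines get bigger and fewer. Summarized from the dumps above:
    line size   Cleaning   LRU list   Collision   List info   Hash
     4 kiB      196 kiB    148 kiB    196 kiB     148 kiB     20 kiB
     8 kiB      100 kiB     76 kiB    116 kiB      76 kiB     12 kiB
    16 kiB       52 kiB     40 kiB     76 kiB      40 kiB      8 kiB
    32 kiB       28 kiB     20 kiB     56 kiB      20 kiB      4 kiB
    64 kiB       16 kiB     12 kiB     44 kiB      12 kiB      4 kiB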
00:20:31.849   10:58:20	-- management/configuration-change.sh@25 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:31.849   10:58:20	-- management/configuration-change.sh@25 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:20:32.108  true
00:20:32.108   10:58:20	-- management/configuration-change.sh@29 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:32.108   10:58:20	-- management/configuration-change.sh@29 -- # jq -e '.[0] | .driver_specific.cache_line_size == 64'
00:20:32.368  true
00:20:32.368   10:58:21	-- management/configuration-change.sh@31 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:32.368   10:58:21	-- management/configuration-change.sh@31 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.cache_line_size == 64'
00:20:32.627  true
00:20:32.627   10:58:21	-- management/configuration-change.sh@34 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_delete Cache0
00:20:32.888  [2024-12-15 10:58:21.644171] Cache0: Flushing cache
00:20:32.888  [2024-12-15 10:58:21.644210] Cache0: Flushing cache completed
00:20:32.888  [2024-12-15 10:58:21.644575] Cache0.Malloc1: Removing core
00:20:32.888  [2024-12-15 10:58:21.677732] Cache0: Core Malloc1 successfully removed
00:20:32.888  [2024-12-15 10:58:21.677798] Cache0: Stopping cache
00:20:32.888  [2024-12-15 10:58:21.761185] Cache0: Done saving cache state!
00:20:32.888  [2024-12-15 10:58:21.781140] Cache Cache0 successfully stopped
00:20:32.888   10:58:21	-- management/configuration-change.sh@35 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:20:33.148   10:58:22	-- management/configuration-change.sh@36 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:20:33.408   10:58:22	-- management/configuration-change.sh@40 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc0
00:20:33.667  Malloc0
00:20:33.667   10:58:22	-- management/configuration-change.sh@41 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 101 512 -b Malloc1
00:20:33.927  Malloc1
00:20:33.927   10:58:22	-- management/configuration-change.sh@42 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_create Cache0 wt Malloc0 Malloc1
00:20:34.187  [2024-12-15 10:58:23.087055] Inserting cache Cache0
00:20:34.187  [2024-12-15 10:58:23.087533] Cache0: Metadata initialized
00:20:34.187  [2024-12-15 10:58:23.087978] Cache0: Successfully added
00:20:34.187  [2024-12-15 10:58:23.087986] Cache0: Cache mode : wt
00:20:34.187  [2024-12-15 10:58:23.098859] Cache0: Super block config offset : 0 kiB
00:20:34.187  [2024-12-15 10:58:23.098886] Cache0: Super block config size : 2200 B
00:20:34.187  [2024-12-15 10:58:23.098893] Cache0: Super block runtime offset : 128 kiB
00:20:34.187  [2024-12-15 10:58:23.098899] Cache0: Super block runtime size : 4 B
00:20:34.187  [2024-12-15 10:58:23.098906] Cache0: Reserved offset : 256 kiB
00:20:34.187  [2024-12-15 10:58:23.098912] Cache0: Reserved size : 128 kiB
00:20:34.187  [2024-12-15 10:58:23.098919] Cache0: Part config offset : 384 kiB
00:20:34.187  [2024-12-15 10:58:23.098925] Cache0: Part config size : 48 kiB
00:20:34.187  [2024-12-15 10:58:23.098931] Cache0: Part runtime offset : 640 kiB
00:20:34.187  [2024-12-15 10:58:23.098938] Cache0: Part runtime size : 72 kiB
00:20:34.187  [2024-12-15 10:58:23.098944] Cache0: Core config offset : 768 kiB
00:20:34.187  [2024-12-15 10:58:23.098950] Cache0: Core config size : 512 kiB
00:20:34.187  [2024-12-15 10:58:23.098957] Cache0: Core runtime offset : 1792 kiB
00:20:34.187  [2024-12-15 10:58:23.098963] Cache0: Core runtime size : 1172 kiB
00:20:34.187  [2024-12-15 10:58:23.098969] Cache0: Core UUID offset : 3072 kiB
00:20:34.187  [2024-12-15 10:58:23.098975] Cache0: Core UUID size : 16384 kiB
00:20:34.187  [2024-12-15 10:58:23.098982] Cache0: Cleaning offset : 35840 kiB
00:20:34.187  [2024-12-15 10:58:23.098988] Cache0: Cleaning size : 196 kiB
00:20:34.187  [2024-12-15 10:58:23.098994] Cache0: LRU list offset : 36096 kiB
00:20:34.187  [2024-12-15 10:58:23.099001] Cache0: LRU list size : 148 kiB
00:20:34.187  [2024-12-15 10:58:23.099007] Cache0: Collision offset : 36352 kiB
00:20:34.187  [2024-12-15 10:58:23.099013] Cache0: Collision size : 196 kiB
00:20:34.187  [2024-12-15 10:58:23.099019] Cache0: List info offset : 36608 kiB
00:20:34.187  [2024-12-15 10:58:23.099026] Cache0: List info size : 148 kiB
00:20:34.187  [2024-12-15 10:58:23.099032] Cache0: Hash offset : 36864 kiB
00:20:34.187  [2024-12-15 10:58:23.099038] Cache0: Hash size : 20 kiB
00:20:34.187  [2024-12-15 10:58:23.099045] Cache0: Cache line size: 4 kiB
00:20:34.187  [2024-12-15 10:58:23.099053] Cache0: Metadata capacity: 18 MiB
00:20:34.188  [2024-12-15 10:58:23.109391] Cache0: Policy 'always' initialized successfully
00:20:34.447  [2024-12-15 10:58:23.223360] Cache0: Done saving cache state!
00:20:34.447  [2024-12-15 10:58:23.255004] Cache0: Cache attached
00:20:34.447  [2024-12-15 10:58:23.255098] Cache0: Successfully attached
00:20:34.447  [2024-12-15 10:58:23.255378] Cache0: Inserting core Malloc1
00:20:34.447  [2024-12-15 10:58:23.255401] Cache0.Malloc1: Sequential cutoff init
00:20:34.447  [2024-12-15 10:58:23.286859] Cache0.Malloc1: Successfully added
00:20:34.447  Cache0
00:20:34.447   10:58:23	-- management/configuration-change.sh@44 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_get_bdevs
00:20:34.447   10:58:23	-- management/configuration-change.sh@44 -- # jq -e '.[0] | .started and .cache.attached and .core.attached'
00:20:34.708  true
00:20:34.708   10:58:23	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:20:34.708   10:58:23	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wt
00:20:34.968  [2024-12-15 10:58:23.790157] Cache0: Cache mode 'Write Through' is already set
00:20:34.968  wt
00:20:34.968   10:58:23	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:34.968   10:58:23	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wt"'
00:20:35.227  true
00:20:35.227   10:58:24	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:35.227   10:58:24	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wt"'
00:20:35.487  true
00:20:35.487   10:58:24	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:20:35.487   10:58:24	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wb
00:20:35.746  [2024-12-15 10:58:24.540290] Cache0: Changing cache mode from 'Write Through' to 'Write Back' successful
00:20:35.746  wb
00:20:35.746   10:58:24	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:35.746   10:58:24	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wb"'
00:20:36.006  true
00:20:36.006   10:58:24	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:36.006   10:58:24	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wb"'
00:20:36.266  true
00:20:36.266   10:58:25	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:20:36.266   10:58:25	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 pt
00:20:36.266  [2024-12-15 10:58:25.274533] Cache0: Changing cache mode from 'Write Back' to 'Pass Through' successful
00:20:36.266  pt
00:20:36.526   10:58:25	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "pt"'
00:20:36.526   10:58:25	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:37.096  true
00:20:37.096   10:58:25	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:37.096   10:58:25	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "pt"'
00:20:37.096  true
00:20:37.096   10:58:26	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:20:37.096   10:58:26	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wa
00:20:37.665  [2024-12-15 10:58:26.570096] Cache0: Changing cache mode from 'Pass Through' to 'Write Around' successful
00:20:37.665  wa
00:20:37.665   10:58:26	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:37.665   10:58:26	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wa"'
00:20:37.925  true
00:20:37.925   10:58:26	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:37.925   10:58:26	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wa"'
00:20:38.185  true
00:20:38.185   10:58:27	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:20:38.185   10:58:27	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wi
00:20:38.444  [2024-12-15 10:58:27.308249] Cache0: Changing cache mode from 'Write Around' to 'Write Invalidate' successful
00:20:38.444  wi
00:20:38.444   10:58:27	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:38.444   10:58:27	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wi"'
00:20:39.014  true
00:20:39.014   10:58:27	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:39.014   10:58:27	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wi"'
00:20:39.273  true
00:20:39.273   10:58:28	-- management/configuration-change.sh@48 -- # for cache_mode in "${cache_modes[@]}"
00:20:39.273   10:58:28	-- management/configuration-change.sh@49 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_cache_mode Cache0 wo
00:20:39.534  [2024-12-15 10:58:28.343172] Cache0: Changing cache mode from 'Write Invalidate' to 'Write Only' successful
00:20:39.534  wo
00:20:39.534   10:58:28	-- management/configuration-change.sh@52 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Cache0
00:20:39.534   10:58:28	-- management/configuration-change.sh@52 -- # jq -e '.[0] | .driver_specific.mode == "wo"'
00:20:39.796  true
00:20:39.796   10:58:28	-- management/configuration-change.sh@54 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:20:39.796   10:58:28	-- management/configuration-change.sh@54 -- # jq -e '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == "wo"'
00:20:40.056  true
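[editor note] The @48-@54 loop above walks the live Cache0 through every entry of cache_modes (wt wb pt wa wi wo) without recreating the bdev; the first step is a no-op ("Cache mode 'Write Through' is already set") since the bdev was created in wt. After each switch, both the runtime view and the saved config must agree. A condensed, hedged equivalent of one pass (the script likely inlines the mode rather than using --arg):
    for cache_mode in "${cache_modes[@]}"; do
        $rpc_py bdev_ocf_set_cache_mode Cache0 "$cache_mode"
        $rpc_py bdev_get_bdevs -b Cache0 \
          | jq -e --arg m "$cache_mode" '.[0].driver_specific.mode == $m'
        $rpc_py save_subsystem_config -n bdev \
          | jq -e --arg m "$cache_mode" \
              '.config | .[] | select(.method == "bdev_ocf_create") | .params.mode == $m'
    done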
00:20:40.056   10:58:28	-- management/configuration-change.sh@59 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p always -t 64
00:20:40.317  [2024-12-15 10:58:29.093323] Cache0.Malloc1: Changing sequential cutoff policy from full to always
00:20:40.317  [2024-12-15 10:58:29.093392] Cache0.Malloc1: Changing sequential cutoff threshold from 1024 to 65536 bytes successful
00:20:40.317   10:58:29	-- management/configuration-change.sh@60 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py bdev_ocf_set_seqcutoff Cache0 -p never -t 16
00:20:40.577  [2024-12-15 10:58:29.342030] Cache0.Malloc1: Changing sequential cutoff policy from always to never
00:20:40.577  [2024-12-15 10:58:29.342096] Cache0.Malloc1: Changing sequential cutoff threshold from 65536 to 16384 bytes successful
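[editor note] @59-@60 exercise bdev_ocf_set_seqcutoff on the running cache: -p picks the sequential cutoff policy and -t the threshold in KiB, which OCF echoes back in bytes (64 -> 65536, 16 -> 16384). The traced calls, annotated:
    # switch from the default 'full' policy to 'always' with a 64 KiB threshold
    $rpc_py bdev_ocf_set_seqcutoff Cache0 -p always -t 64
    # then 'never', which presumably stops classifying streams as sequential, at 16 KiB
    $rpc_py bdev_ocf_set_seqcutoff Cache0 -p never -t 16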
00:20:40.577   10:58:29	-- management/configuration-change.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:20:40.577   10:58:29	-- management/configuration-change.sh@63 -- # killprocess 2209137
00:20:40.577   10:58:29	-- common/autotest_common.sh@936 -- # '[' -z 2209137 ']'
00:20:40.577   10:58:29	-- common/autotest_common.sh@940 -- # kill -0 2209137
00:20:40.577    10:58:29	-- common/autotest_common.sh@941 -- # uname
00:20:40.577   10:58:29	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:40.577    10:58:29	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2209137
00:20:40.577   10:58:29	-- common/autotest_common.sh@942 -- # process_name=reactor_0
00:20:40.577   10:58:29	-- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:20:40.577   10:58:29	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2209137'
00:20:40.577  killing process with pid 2209137
00:20:40.577   10:58:29	-- common/autotest_common.sh@955 -- # kill 2209137
00:20:40.577   10:58:29	-- common/autotest_common.sh@960 -- # wait 2209137
00:20:40.577  [2024-12-15 10:58:29.582105] Cache0: Flushing cache
00:20:40.577  [2024-12-15 10:58:29.582155] Cache0: Flushing cache completed
00:20:40.577  [2024-12-15 10:58:29.582209] Cache0: Stopping cache
00:20:40.837  [2024-12-15 10:58:29.689818] Cache0: Done saving cache state!
00:20:40.837  [2024-12-15 10:58:29.706933] Cache Cache0 successfully stopped
00:20:41.407  
00:20:41.407  real	0m23.233s
00:20:41.407  user	0m39.745s
00:20:41.407  sys	0m3.718s
00:20:41.407   10:58:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:41.407   10:58:30	-- common/autotest_common.sh@10 -- # set +x
00:20:41.407  ************************************
00:20:41.407  END TEST ocf_configuration_change
00:20:41.407  ************************************
00:20:41.407  
00:20:41.407  real	1m50.263s
00:20:41.407  user	2m54.695s
00:20:41.407  sys	0m18.333s
00:20:41.407   10:58:30	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:20:41.407   10:58:30	-- common/autotest_common.sh@10 -- # set +x
00:20:41.407  ************************************
00:20:41.407  END TEST ocf
00:20:41.407  ************************************
00:20:41.407   10:58:30	-- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:20:41.407   10:58:30	-- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:41.407   10:58:30	-- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:20:41.407   10:58:30	-- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:20:41.407   10:58:30	-- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:20:41.407   10:58:30	-- spdk/autotest.sh@353 -- # [[ 1 -eq 1 ]]
00:20:41.407   10:58:30	-- spdk/autotest.sh@354 -- # run_test scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh
00:20:41.407   10:58:30	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:41.407   10:58:30	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:41.407   10:58:30	-- common/autotest_common.sh@10 -- # set +x
00:20:41.407  ************************************
00:20:41.407  START TEST scheduler
00:20:41.407  ************************************
00:20:41.407   10:58:30	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/scheduler.sh
00:20:41.407  * Looking for test storage...
00:20:41.407  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:20:41.407    10:58:30	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:41.407     10:58:30	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:41.407     10:58:30	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:41.407    10:58:30	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:41.407    10:58:30	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:41.407    10:58:30	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:41.407    10:58:30	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:41.407    10:58:30	-- scripts/common.sh@335 -- # IFS=.-:
00:20:41.407    10:58:30	-- scripts/common.sh@335 -- # read -ra ver1
00:20:41.407    10:58:30	-- scripts/common.sh@336 -- # IFS=.-:
00:20:41.407    10:58:30	-- scripts/common.sh@336 -- # read -ra ver2
00:20:41.407    10:58:30	-- scripts/common.sh@337 -- # local 'op=<'
00:20:41.407    10:58:30	-- scripts/common.sh@339 -- # ver1_l=2
00:20:41.407    10:58:30	-- scripts/common.sh@340 -- # ver2_l=1
00:20:41.407    10:58:30	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:41.407    10:58:30	-- scripts/common.sh@343 -- # case "$op" in
00:20:41.407    10:58:30	-- scripts/common.sh@344 -- # : 1
00:20:41.407    10:58:30	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:41.407    10:58:30	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:41.407     10:58:30	-- scripts/common.sh@364 -- # decimal 1
00:20:41.407     10:58:30	-- scripts/common.sh@352 -- # local d=1
00:20:41.407     10:58:30	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:41.407     10:58:30	-- scripts/common.sh@354 -- # echo 1
00:20:41.407    10:58:30	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:41.407     10:58:30	-- scripts/common.sh@365 -- # decimal 2
00:20:41.407     10:58:30	-- scripts/common.sh@352 -- # local d=2
00:20:41.407     10:58:30	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:41.407     10:58:30	-- scripts/common.sh@354 -- # echo 2
00:20:41.407    10:58:30	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:41.407    10:58:30	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:41.407    10:58:30	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:41.407    10:58:30	-- scripts/common.sh@367 -- # return 0
00:20:41.407    10:58:30	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:41.407    10:58:30	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:41.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.407  		--rc genhtml_branch_coverage=1
00:20:41.407  		--rc genhtml_function_coverage=1
00:20:41.407  		--rc genhtml_legend=1
00:20:41.407  		--rc geninfo_all_blocks=1
00:20:41.407  		--rc geninfo_unexecuted_blocks=1
00:20:41.407  		
00:20:41.407  		'
00:20:41.407    10:58:30	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:41.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.407  		--rc genhtml_branch_coverage=1
00:20:41.407  		--rc genhtml_function_coverage=1
00:20:41.407  		--rc genhtml_legend=1
00:20:41.407  		--rc geninfo_all_blocks=1
00:20:41.407  		--rc geninfo_unexecuted_blocks=1
00:20:41.407  		
00:20:41.407  		'
00:20:41.407    10:58:30	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:41.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.407  		--rc genhtml_branch_coverage=1
00:20:41.407  		--rc genhtml_function_coverage=1
00:20:41.407  		--rc genhtml_legend=1
00:20:41.407  		--rc geninfo_all_blocks=1
00:20:41.407  		--rc geninfo_unexecuted_blocks=1
00:20:41.407  		
00:20:41.407  		'
00:20:41.407    10:58:30	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:41.407  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.407  		--rc genhtml_branch_coverage=1
00:20:41.407  		--rc genhtml_function_coverage=1
00:20:41.407  		--rc genhtml_legend=1
00:20:41.407  		--rc geninfo_all_blocks=1
00:20:41.407  		--rc geninfo_unexecuted_blocks=1
00:20:41.407  		
00:20:41.407  		'
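The xtrace block above is scripts/common.sh deciding whether the installed lcov predates 2.0 (lt 1.15 2): both version strings are split on ., - and :, then compared element by element; here ver1[0]=1 < ver2[0]=2, so lt returns 0 and the pre-2.0 --rc lcov_* option spelling is exported into LCOV_OPTS and LCOV. A condensed sketch of the comparison, assuming the same semantics as the traced cmp_versions:

  # Return 0 (true) when version $1 is older than version $2.
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1    # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov < 2: use the old --rc lcov_* option spelling"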
00:20:41.407   10:58:30	-- scheduler/scheduler.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/isolate_cores.sh
00:20:41.407    10:58:30	-- scheduler/isolate_cores.sh@6 -- # xtrace_disable
00:20:41.407    10:58:30	-- common/autotest_common.sh@10 -- # set +x
00:20:41.978  Moving 2212292 (PF_SUPERPRIV,PF_RANDOMIZE) to / from N/A
00:20:41.978  Moving 2212292 (PF_SUPERPRIV,PF_RANDOMIZE) to /cpuset from N/A
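The two Moving lines are isolate_cores.sh relocating the test PID (2212292) between cgroups before the scheduler tests run. Under cgroup v2, which this host is later detected to use, that boils down to writing the PID into the target group's cgroup.procs file; a minimal sketch of the mechanism, with the paths and PID taken from this run:

  # cgroup v2: a process moves by writing its PID to <group>/cgroup.procs.
  pid=2212292
  echo "$pid" | sudo tee /sys/fs/cgroup/cgroup.procs            # move to /
  sudo mkdir -p /sys/fs/cgroup/cpuset
  echo "$pid" | sudo tee /sys/fs/cgroup/cpuset/cgroup.procs     # move to /cpuset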
00:20:41.978   10:58:30	-- scheduler/scheduler.sh@12 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/setup.sh
00:20:43.888  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:20:43.888  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:20:43.888  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
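setup.sh reports every function it manages (0000:5e:00.0 is the NVMe device under test; the 00:04.x and 80:04.x 8086:2021 functions are apparently I/OAT DMA channels) as already bound to vfio-pci, so no rebinding is needed. The current binding of any function can be checked through sysfs; a sketch, assuming the BDF from this run:

  bdf=0000:5e:00.0
  # The driver symlink names the currently bound driver, if any.
  readlink -f "/sys/bus/pci/devices/$bdf/driver"    # .../drivers/vfio-pci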
00:20:43.888   10:58:32	-- scheduler/scheduler.sh@14 -- # run_test idle /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh
00:20:43.888   10:58:32	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:43.888   10:58:32	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:43.888   10:58:32	-- common/autotest_common.sh@10 -- # set +x
00:20:43.888  ************************************
00:20:43.888  START TEST idle
00:20:43.888  ************************************
00:20:43.888   10:58:32	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/idle.sh
00:20:43.888  * Looking for test storage...
00:20:43.888  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:20:43.888    10:58:32	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:20:43.888     10:58:32	-- common/autotest_common.sh@1690 -- # lcov --version
00:20:43.888     10:58:32	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:20:43.888    10:58:32	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:20:43.888    10:58:32	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:20:43.888    10:58:32	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:20:43.888    10:58:32	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:20:43.888    10:58:32	-- scripts/common.sh@335 -- # IFS=.-:
00:20:43.888    10:58:32	-- scripts/common.sh@335 -- # read -ra ver1
00:20:43.888    10:58:32	-- scripts/common.sh@336 -- # IFS=.-:
00:20:43.888    10:58:32	-- scripts/common.sh@336 -- # read -ra ver2
00:20:43.888    10:58:32	-- scripts/common.sh@337 -- # local 'op=<'
00:20:43.888    10:58:32	-- scripts/common.sh@339 -- # ver1_l=2
00:20:43.888    10:58:32	-- scripts/common.sh@340 -- # ver2_l=1
00:20:43.888    10:58:32	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:20:43.888    10:58:32	-- scripts/common.sh@343 -- # case "$op" in
00:20:43.888    10:58:32	-- scripts/common.sh@344 -- # : 1
00:20:43.888    10:58:32	-- scripts/common.sh@363 -- # (( v = 0 ))
00:20:43.888    10:58:32	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:43.888     10:58:32	-- scripts/common.sh@364 -- # decimal 1
00:20:43.888     10:58:32	-- scripts/common.sh@352 -- # local d=1
00:20:43.888     10:58:32	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:43.888     10:58:32	-- scripts/common.sh@354 -- # echo 1
00:20:43.888    10:58:32	-- scripts/common.sh@364 -- # ver1[v]=1
00:20:43.888     10:58:32	-- scripts/common.sh@365 -- # decimal 2
00:20:43.888     10:58:32	-- scripts/common.sh@352 -- # local d=2
00:20:43.888     10:58:32	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:43.888     10:58:32	-- scripts/common.sh@354 -- # echo 2
00:20:43.888    10:58:32	-- scripts/common.sh@365 -- # ver2[v]=2
00:20:43.888    10:58:32	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:20:43.888    10:58:32	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:20:43.888    10:58:32	-- scripts/common.sh@367 -- # return 0
00:20:43.888    10:58:32	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:43.888    10:58:32	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:20:43.888  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:43.888  		--rc genhtml_branch_coverage=1
00:20:43.888  		--rc genhtml_function_coverage=1
00:20:43.888  		--rc genhtml_legend=1
00:20:43.888  		--rc geninfo_all_blocks=1
00:20:43.888  		--rc geninfo_unexecuted_blocks=1
00:20:43.888  		
00:20:43.888  		'
00:20:43.888    10:58:32	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:20:43.888  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:43.888  		--rc genhtml_branch_coverage=1
00:20:43.888  		--rc genhtml_function_coverage=1
00:20:43.888  		--rc genhtml_legend=1
00:20:43.888  		--rc geninfo_all_blocks=1
00:20:43.888  		--rc geninfo_unexecuted_blocks=1
00:20:43.888  		
00:20:43.888  		'
00:20:43.888    10:58:32	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:20:43.888  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:43.888  		--rc genhtml_branch_coverage=1
00:20:43.889  		--rc genhtml_function_coverage=1
00:20:43.889  		--rc genhtml_legend=1
00:20:43.889  		--rc geninfo_all_blocks=1
00:20:43.889  		--rc geninfo_unexecuted_blocks=1
00:20:43.889  		
00:20:43.889  		'
00:20:43.889    10:58:32	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:20:43.889  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:43.889  		--rc genhtml_branch_coverage=1
00:20:43.889  		--rc genhtml_function_coverage=1
00:20:43.889  		--rc genhtml_legend=1
00:20:43.889  		--rc geninfo_all_blocks=1
00:20:43.889  		--rc geninfo_unexecuted_blocks=1
00:20:43.889  		
00:20:43.889  		'
00:20:43.889   10:58:32	-- scheduler/idle.sh@11 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh
00:20:43.889    10:58:32	-- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:20:43.889    10:58:32	-- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:20:43.889    10:58:32	-- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:20:43.889    10:58:32	-- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler
00:20:43.889    10:58:32	-- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:20:43.889    10:58:32	-- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh
00:20:43.889     10:58:32	-- scheduler/cgroups.sh@245 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:20:43.889      10:58:32	-- scheduler/cgroups.sh@246 -- # check_cgroup
00:20:43.889      10:58:32	-- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:20:43.889      10:58:32	-- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:20:43.889      10:58:32	-- scheduler/cgroups.sh@10 -- # echo 2
00:20:43.889     10:58:32	-- scheduler/cgroups.sh@246 -- # cgroup_version=2
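The cgroups.sh trace above determines the cgroup flavor: the presence of /sys/fs/cgroup/cgroup.controllers marks a unified (v2) hierarchy, the controller list must contain cpuset before the harness will pin cores, and the function prints 2, which is captured into cgroup_version. A condensed sketch of the v2 branch exercised here:

  check_cgroup() {
      # cgroup.controllers only exists on a cgroup-v2 (unified) mount.
      if [[ -e /sys/fs/cgroup/cgroup.controllers ]]; then
          # cpuset must be among the enabled controllers for core isolation.
          [[ $(< /sys/fs/cgroup/cgroup.controllers) == *cpuset* ]] && echo 2
      fi
  }
  cgroup_version=$(check_cgroup)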
00:20:43.889   10:58:32	-- scheduler/idle.sh@13 -- # trap 'killprocess "$spdk_pid"' EXIT
00:20:43.889   10:58:32	-- scheduler/idle.sh@71 -- # idle
00:20:43.889   10:58:32	-- scheduler/idle.sh@36 -- # local reactor_framework
00:20:43.889   10:58:32	-- scheduler/idle.sh@37 -- # local reactors thread
00:20:43.889   10:58:32	-- scheduler/idle.sh@38 -- # local thread_cpumask
00:20:43.889   10:58:32	-- scheduler/idle.sh@39 -- # local threads
00:20:43.889   10:58:32	-- scheduler/idle.sh@41 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1
00:20:43.889   10:58:32	-- scheduler/common.sh@405 -- # [[ -e /proc//status ]]
00:20:43.889   10:58:32	-- scheduler/common.sh@409 -- # spdk_pid=2213531
00:20:43.889   10:58:32	-- scheduler/common.sh@411 -- # waitforlisten 2213531
00:20:43.889   10:58:32	-- scheduler/common.sh@408 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc
00:20:43.889   10:58:32	-- common/autotest_common.sh@829 -- # '[' -z 2213531 ']'
00:20:43.889   10:58:32	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:43.889   10:58:32	-- common/autotest_common.sh@834 -- # local max_retries=100
00:20:43.889   10:58:32	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:43.889  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:43.889   10:58:32	-- common/autotest_common.sh@838 -- # xtrace_disable
00:20:43.889   10:58:32	-- common/autotest_common.sh@10 -- # set +x
00:20:43.889  [2024-12-15 10:58:32.733721] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:20:43.889  [2024-12-15 10:58:32.733792] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2213531 ]
00:20:43.889  EAL: No free 2048 kB hugepages reported on node 1
00:20:43.889  [2024-12-15 10:58:32.826983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8
00:20:44.149  [2024-12-15 10:58:32.942529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:20:44.149  [2024-12-15 10:58:32.942768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:44.149  [2024-12-15 10:58:32.942841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:20:44.149  [2024-12-15 10:58:32.942896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37
00:20:44.149  [2024-12-15 10:58:32.942876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:20:44.149  [2024-12-15 10:58:32.942921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38
00:20:44.149  [2024-12-15 10:58:32.942945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39
00:20:44.149  [2024-12-15 10:58:32.942990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40
00:20:44.149  [2024-12-15 10:58:32.942992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:20:44.720   10:58:33	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:44.720   10:58:33	-- common/autotest_common.sh@862 -- # return 0
00:20:44.720   10:58:33	-- scheduler/common.sh@412 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic
00:20:46.102  POWER: Env isn't set yet!
00:20:46.102  POWER: Attempting to initialise ACPI cpufreq power management...
00:20:46.102  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:20:46.102  POWER: Cannot set governor of lcore 1 to userspace
00:20:46.102  POWER: Attempting to initialise PSTAT power management...
00:20:46.102  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 1 power management
00:20:46.102  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 2 power management
00:20:46.102  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 3 power management
00:20:46.102  POWER: Power management governor of lcore 4 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 4 power management
00:20:46.102  POWER: Power management governor of lcore 37 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 37 power management
00:20:46.102  POWER: Power management governor of lcore 38 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 38 power management
00:20:46.102  POWER: Power management governor of lcore 39 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 39 power management
00:20:46.102  POWER: Power management governor of lcore 40 has been set to 'performance' successfully
00:20:46.102  POWER: Initialized successfully for lcore 40 power management
00:20:46.102  [2024-12-15 10:58:35.004858] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:20:46.102  [2024-12-15 10:58:35.004893] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:20:46.102  [2024-12-15 10:58:35.004909] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
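The sequence above is the standard dynamic-scheduler bring-up: spdk_tgt is launched paused on cores 1-4 and 37-40, the dynamic scheduler is selected over the RPC socket (its parameters are reported as load limit 20, core limit 80, core busy 95), and only then is subsystem initialization released by the framework_start_init call at the next step. A sketch of the same three steps, reusing the commands from this run:

  # 1. Start the target paused so the scheduler can be chosen first.
  ./build/bin/spdk_tgt -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc &
  # 2. Switch from the default static scheduler to dynamic.
  ./scripts/rpc.py framework_set_scheduler dynamic
  # 3. Finish initialization; reactors now run under the dynamic scheduler.
  ./scripts/rpc.py framework_start_init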
00:20:46.102   10:58:35	-- scheduler/common.sh@413 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:20:46.671  [2024-12-15 10:58:35.459323] 'OCF_Core' volume operations registered
00:20:46.672  [2024-12-15 10:58:35.463563] 'OCF_Cache' volume operations registered
00:20:46.672  [2024-12-15 10:58:35.468417] 'OCF Composite' volume operations registered
00:20:46.672  [2024-12-15 10:58:35.472714] 'SPDK_block_device' volume operations registered
00:20:46.672   10:58:35	-- scheduler/idle.sh@48 -- # get_thread_stats_current
00:20:46.672   10:58:35	-- scheduler/common.sh@418 -- # xtrace_disable
00:20:46.672   10:58:35	-- common/autotest_common.sh@10 -- # set +x
00:20:48.580   10:58:37	-- scheduler/idle.sh@50 -- # xtrace_disable
00:20:48.580   10:58:37	-- common/autotest_common.sh@10 -- # set +x
00:20:48.580  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:20:48.580  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:20:48.580  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:20:48.580  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:20:48.580  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:20:48.580  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:20:48.840  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:20:49.100  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:20:49.100  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:20:49.100  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
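Each thread line above pairs a name with a hex cpumask, and the masks decode back to the -m core list: 0x2 is core 1 (app_thread, iscsi_poll_group_1), 0x10000000000 is core 40, and 0x1e00000001e is cores 1-4 plus 37-40, i.e. the nvmf poll groups are free to run anywhere in the set. A one-liner reproducing the wide mask:

  mask=0
  for core in 1 2 3 4 37 38 39 40; do mask=$(( mask | 1 << core )); done
  printf '0x%x\n' "$mask"    # -> 0x1e00000001e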
00:20:51.010  [load:  2%, idle: 326913652, busy:   9773838] app_thread is idle
00:20:51.010  [load:  0%, idle: 303257064, busy:    228054] nvmf_tgt_poll_group_0 is idle
00:20:51.010  [load:  0%, idle: 302521950, busy:    227768] nvmf_tgt_poll_group_1 is idle
00:20:51.010  [load:  0%, idle: 302577918, busy:    227796] nvmf_tgt_poll_group_2 is idle
00:20:51.010  [load:  0%, idle: 302468024, busy:    227756] nvmf_tgt_poll_group_3 is idle
00:20:51.010  [load:  0%, idle: 302719884, busy:    227428] nvmf_tgt_poll_group_4 is idle
00:20:51.010  [load:  0%, idle: 304353506, busy:    227854] nvmf_tgt_poll_group_5 is idle
00:20:51.010  [load:  0%, idle: 303711090, busy:    238358] nvmf_tgt_poll_group_6 is idle
00:20:51.010  [load:  0%, idle: 302924854, busy:    227404] nvmf_tgt_poll_group_7 is idle
00:20:51.010  [load:  0%, idle: 307384696, busy:    229780] iscsi_poll_group_1 is idle
00:20:51.010  [load:  0%, idle: 306963584, busy:    228922] iscsi_poll_group_2 is idle
00:20:51.010  [load:  0%, idle: 308354516, busy:    229112] iscsi_poll_group_3 is idle
00:20:51.010  [load:  0%, idle: 306652846, busy:    229702] iscsi_poll_group_4 is idle
00:20:51.010  [load:  0%, idle: 306653414, busy:    236510] iscsi_poll_group_37 is idle
00:20:51.010  [load:  0%, idle: 306542092, busy:    236034] iscsi_poll_group_38 is idle
00:20:51.010  [load:  0%, idle: 306221688, busy:    236154] iscsi_poll_group_39 is idle
00:20:51.010  [load:  0%, idle: 306731768, busy:    236626] iscsi_poll_group_40 is idle
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:20:51.010  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:20:51.270  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:20:51.270  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:20:51.270  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:20:51.270  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:20:51.270  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:20:51.270  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:20:51.270  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:20:53.180  [load:  3%, idle: 327891548, busy:  11056838] app_thread is idle
00:20:53.180  [load:  0%, idle: 304008648, busy:    250598] nvmf_tgt_poll_group_0 is idle
00:20:53.180  [load:  0%, idle: 303209826, busy:    250090] nvmf_tgt_poll_group_1 is idle
00:20:53.180  [load:  0%, idle: 303065940, busy:    265050] nvmf_tgt_poll_group_2 is idle
00:20:53.180  [load:  0%, idle: 302984510, busy:    250868] nvmf_tgt_poll_group_3 is idle
00:20:53.180  [load:  0%, idle: 303365890, busy:    251130] nvmf_tgt_poll_group_4 is idle
00:20:53.180  [load:  0%, idle: 304991862, busy:    250246] nvmf_tgt_poll_group_5 is idle
00:20:53.180  [load:  0%, idle: 304186814, busy:    250312] nvmf_tgt_poll_group_6 is idle
00:20:53.180  [load:  0%, idle: 303428090, busy:    250018] nvmf_tgt_poll_group_7 is idle
00:20:53.180  [load:  0%, idle: 308424494, busy:    252402] iscsi_poll_group_1 is idle
00:20:53.180  [load:  0%, idle: 307332784, busy:    252230] iscsi_poll_group_2 is idle
00:20:53.180  [load:  0%, idle: 308834646, busy:    252156] iscsi_poll_group_3 is idle
00:20:53.180  [load:  0%, idle: 307132910, busy:    252722] iscsi_poll_group_4 is idle
00:20:53.180  [load:  0%, idle: 307187846, busy:    273738] iscsi_poll_group_37 is idle
00:20:53.180  [load:  0%, idle: 306868880, busy:    259854] iscsi_poll_group_38 is idle
00:20:53.180  [load:  0%, idle: 306602606, busy:    260836] iscsi_poll_group_39 is idle
00:20:53.181  [load:  0%, idle: 307750616, busy:    260714] iscsi_poll_group_40 is idle
00:20:53.181  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:20:53.181  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:20:53.181  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:20:53.181  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:20:53.181  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:20:53.181  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:20:53.181  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:20:53.440  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:20:53.700  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:20:53.700  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:20:55.610  [load:  3%, idle: 327479866, busy:  12462268] app_thread is idle
00:20:55.610  [load:  0%, idle: 303517016, busy:    295218] nvmf_tgt_poll_group_0 is idle
00:20:55.610  [load:  0%, idle: 302868460, busy:    294442] nvmf_tgt_poll_group_1 is idle
00:20:55.610  [load:  0%, idle: 302422428, busy:    294356] nvmf_tgt_poll_group_2 is idle
00:20:55.610  [load:  0%, idle: 303260274, busy:    294462] nvmf_tgt_poll_group_3 is idle
00:20:55.610  [load:  0%, idle: 302766512, busy:    294692] nvmf_tgt_poll_group_4 is idle
00:20:55.610  [load:  0%, idle: 303627102, busy:    309008] nvmf_tgt_poll_group_5 is idle
00:20:55.610  [load:  0%, idle: 303648834, busy:    294952] nvmf_tgt_poll_group_6 is idle
00:20:55.610  [load:  0%, idle: 302634714, busy:    294496] nvmf_tgt_poll_group_7 is idle
00:20:55.610  [load:  0%, idle: 307547194, busy:    296314] iscsi_poll_group_1 is idle
00:20:55.610  [load:  0%, idle: 306598076, busy:    296506] iscsi_poll_group_2 is idle
00:20:55.610  [load:  0%, idle: 308059470, busy:    296276] iscsi_poll_group_3 is idle
00:20:55.610  [load:  0%, idle: 306613242, busy:    297242] iscsi_poll_group_4 is idle
00:20:55.610  [load:  0%, idle: 306608772, busy:    305356] iscsi_poll_group_37 is idle
00:20:55.610  [load:  0%, idle: 306227498, busy:    316342] iscsi_poll_group_38 is idle
00:20:55.610  [load:  0%, idle: 306420416, busy:    305240] iscsi_poll_group_39 is idle
00:20:55.610  [load:  0%, idle: 306872998, busy:    305870] iscsi_poll_group_40 is idle
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:20:55.610  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:20:55.869  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:20:55.869  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:20:55.869  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:20:55.869  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:20:55.869  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:20:57.780  [load:  4%, idle: 328358490, busy:  14277478] app_thread is idle
00:20:57.780  [load:  0%, idle: 303422752, busy:    334132] nvmf_tgt_poll_group_0 is idle
00:20:57.780  [load:  0%, idle: 302781382, busy:    333700] nvmf_tgt_poll_group_1 is idle
00:20:57.780  [load:  0%, idle: 303285204, busy:    334028] nvmf_tgt_poll_group_2 is idle
00:20:57.780  [load:  0%, idle: 302952232, busy:    333948] nvmf_tgt_poll_group_3 is idle
00:20:57.780  [load:  0%, idle: 303105394, busy:    346564] nvmf_tgt_poll_group_4 is idle
00:20:57.780  [load:  0%, idle: 304025754, busy:    333946] nvmf_tgt_poll_group_5 is idle
00:20:57.780  [load:  0%, idle: 304059360, busy:    333968] nvmf_tgt_poll_group_6 is idle
00:20:57.780  [load:  0%, idle: 303085956, busy:    333928] nvmf_tgt_poll_group_7 is idle
00:20:57.780  [load:  0%, idle: 307548404, busy:    344168] iscsi_poll_group_1 is idle
00:20:57.780  [load:  0%, idle: 307229726, busy:    343092] iscsi_poll_group_2 is idle
00:20:57.780  [load:  0%, idle: 308905504, busy:    343314] iscsi_poll_group_3 is idle
00:20:57.780  [load:  0%, idle: 307175744, busy:    357976] iscsi_poll_group_4 is idle
00:20:57.780  [load:  0%, idle: 307530384, busy:    354294] iscsi_poll_group_37 is idle
00:20:57.780  [load:  0%, idle: 306544286, busy:    353676] iscsi_poll_group_38 is idle
00:20:57.780  [load:  0%, idle: 307011640, busy:    353618] iscsi_poll_group_39 is idle
00:20:57.780  [load:  0%, idle: 307388052, busy:    354720] iscsi_poll_group_40 is idle
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread app_thread cpumask: 0x2
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_0 cpumask: 0x1e00000001e
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_1 cpumask: 0x1e00000001e
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_2 cpumask: 0x1e00000001e
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_3 cpumask: 0x1e00000001e
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_4 cpumask: 0x1e00000001e
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_5 cpumask: 0x1e00000001e
00:20:57.780  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_6 cpumask: 0x1e00000001e
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread nvmf_tgt_poll_group_7 cpumask: 0x1e00000001e
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_1 cpumask: 0x2
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_2 cpumask: 0x4
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_3 cpumask: 0x8
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_4 cpumask: 0x10
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_37 cpumask: 0x2000000000
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_38 cpumask: 0x4000000000
00:20:58.040  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_39 cpumask: 0x8000000000
00:20:58.300  SPDK cpumask: [1,2,3,4,37,38,39,40] Thread iscsi_poll_group_40 cpumask: 0x10000000000
00:21:00.213  [load:  4%, idle: 330795782, busy:  17172662] app_thread is idle
00:21:00.213  [load:  0%, idle: 305223916, busy:    385068] nvmf_tgt_poll_group_0 is idle
00:21:00.213  [load:  0%, idle: 304374368, busy:    385166] nvmf_tgt_poll_group_1 is idle
00:21:00.213  [load:  0%, idle: 303834912, busy:    405696] nvmf_tgt_poll_group_2 is idle
00:21:00.213  [load:  0%, idle: 304399174, busy:    385712] nvmf_tgt_poll_group_3 is idle
00:21:00.213  [load:  0%, idle: 304724580, busy:    385630] nvmf_tgt_poll_group_4 is idle
00:21:00.213  [load:  0%, idle: 305305774, busy:    385012] nvmf_tgt_poll_group_5 is idle
00:21:00.213  [load:  0%, idle: 305122486, busy:    384616] nvmf_tgt_poll_group_6 is idle
00:21:00.213  [load:  0%, idle: 304626044, busy:    384922] nvmf_tgt_poll_group_7 is idle
00:21:00.213  [load:  0%, idle: 308881916, busy:    406980] iscsi_poll_group_1 is idle
00:21:00.213  [load:  0%, idle: 308210860, busy:    388350] iscsi_poll_group_2 is idle
00:21:00.213  [load:  0%, idle: 309565610, busy:    388286] iscsi_poll_group_3 is idle
00:21:00.213  [load:  0%, idle: 308096804, busy:    389658] iscsi_poll_group_4 is idle
00:21:00.213  [load:  0%, idle: 308244036, busy:    400096] iscsi_poll_group_37 is idle
00:21:00.213  [load:  0%, idle: 307772510, busy:    400642] iscsi_poll_group_38 is idle
00:21:00.213  [load:  0%, idle: 307710706, busy:    415996] iscsi_poll_group_39 is idle
00:21:00.213  [load:  0%, idle: 308669756, busy:    401618] iscsi_poll_group_40 is idle
00:21:00.213   10:58:48	-- scheduler/idle.sh@1 -- # killprocess 2213531
00:21:00.213   10:58:48	-- common/autotest_common.sh@936 -- # '[' -z 2213531 ']'
00:21:00.213   10:58:48	-- common/autotest_common.sh@940 -- # kill -0 2213531
00:21:00.213    10:58:48	-- common/autotest_common.sh@941 -- # uname
00:21:00.213   10:58:48	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:00.213    10:58:48	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2213531
00:21:00.213   10:58:48	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:00.213   10:58:48	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:00.213   10:58:48	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2213531'
00:21:00.213  killing process with pid 2213531
00:21:00.213   10:58:48	-- common/autotest_common.sh@955 -- # kill 2213531
00:21:00.213   10:58:48	-- common/autotest_common.sh@960 -- # wait 2213531
00:21:00.213  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:21:00.213  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:21:00.213  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:21:00.213  POWER: Power management governor of lcore 4 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original
00:21:00.213  POWER: Power management governor of lcore 37 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original
00:21:00.213  POWER: Power management governor of lcore 38 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original
00:21:00.213  POWER: Power management governor of lcore 39 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original
00:21:00.213  POWER: Power management governor of lcore 40 has been set to 'powersave' successfully
00:21:00.213  POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original
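On shutdown the DPDK power library puts every managed lcore back on the powersave governor it started with, as the POWER: lines above show. Mechanically, governor selection is a per-CPU sysfs write; a sketch for one core, assuming lcore 1 maps to cpu1:

  # Restore the original governor for one CPU via cpufreq sysfs.
  echo powersave | sudo tee /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor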
00:21:00.786  
00:21:00.786  real	0m17.018s
00:21:00.786  user	0m43.158s
00:21:00.786  sys	0m1.904s
00:21:00.786   10:58:49	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:00.786   10:58:49	-- common/autotest_common.sh@10 -- # set +x
00:21:00.786  ************************************
00:21:00.786  END TEST idle
00:21:00.786  ************************************
00:21:00.786   10:58:49	-- scheduler/scheduler.sh@16 -- # run_test dpdk_governor /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh
00:21:00.786   10:58:49	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:00.786   10:58:49	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:00.786   10:58:49	-- common/autotest_common.sh@10 -- # set +x
00:21:00.786  ************************************
00:21:00.786  START TEST dpdk_governor
00:21:00.786  ************************************
00:21:00.786   10:58:49	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/governor.sh
00:21:00.786  * Looking for test storage...
00:21:00.786  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:21:00.786    10:58:49	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:00.786     10:58:49	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:00.786     10:58:49	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:00.786    10:58:49	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:00.786    10:58:49	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:00.786    10:58:49	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:00.786    10:58:49	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:00.786    10:58:49	-- scripts/common.sh@335 -- # IFS=.-:
00:21:00.786    10:58:49	-- scripts/common.sh@335 -- # read -ra ver1
00:21:00.786    10:58:49	-- scripts/common.sh@336 -- # IFS=.-:
00:21:00.786    10:58:49	-- scripts/common.sh@336 -- # read -ra ver2
00:21:00.786    10:58:49	-- scripts/common.sh@337 -- # local 'op=<'
00:21:00.786    10:58:49	-- scripts/common.sh@339 -- # ver1_l=2
00:21:00.786    10:58:49	-- scripts/common.sh@340 -- # ver2_l=1
00:21:00.786    10:58:49	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:00.786    10:58:49	-- scripts/common.sh@343 -- # case "$op" in
00:21:00.786    10:58:49	-- scripts/common.sh@344 -- # : 1
00:21:00.786    10:58:49	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:00.786    10:58:49	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:00.786     10:58:49	-- scripts/common.sh@364 -- # decimal 1
00:21:00.786     10:58:49	-- scripts/common.sh@352 -- # local d=1
00:21:00.786     10:58:49	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:00.786     10:58:49	-- scripts/common.sh@354 -- # echo 1
00:21:00.786    10:58:49	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:00.786     10:58:49	-- scripts/common.sh@365 -- # decimal 2
00:21:00.786     10:58:49	-- scripts/common.sh@352 -- # local d=2
00:21:00.786     10:58:49	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:00.786     10:58:49	-- scripts/common.sh@354 -- # echo 2
00:21:00.786    10:58:49	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:00.786    10:58:49	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:00.786    10:58:49	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:00.786    10:58:49	-- scripts/common.sh@367 -- # return 0
00:21:00.786    10:58:49	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:00.786    10:58:49	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:00.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.786  		--rc genhtml_branch_coverage=1
00:21:00.786  		--rc genhtml_function_coverage=1
00:21:00.786  		--rc genhtml_legend=1
00:21:00.786  		--rc geninfo_all_blocks=1
00:21:00.786  		--rc geninfo_unexecuted_blocks=1
00:21:00.786  		
00:21:00.786  		'
00:21:00.786    10:58:49	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:00.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.786  		--rc genhtml_branch_coverage=1
00:21:00.786  		--rc genhtml_function_coverage=1
00:21:00.786  		--rc genhtml_legend=1
00:21:00.786  		--rc geninfo_all_blocks=1
00:21:00.786  		--rc geninfo_unexecuted_blocks=1
00:21:00.786  		
00:21:00.786  		'
00:21:00.786    10:58:49	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:00.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.786  		--rc genhtml_branch_coverage=1
00:21:00.786  		--rc genhtml_function_coverage=1
00:21:00.786  		--rc genhtml_legend=1
00:21:00.786  		--rc geninfo_all_blocks=1
00:21:00.786  		--rc geninfo_unexecuted_blocks=1
00:21:00.786  		
00:21:00.786  		'
00:21:00.786    10:58:49	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:00.786  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:00.786  		--rc genhtml_branch_coverage=1
00:21:00.786  		--rc genhtml_function_coverage=1
00:21:00.786  		--rc genhtml_legend=1
00:21:00.786  		--rc geninfo_all_blocks=1
00:21:00.786  		--rc geninfo_unexecuted_blocks=1
00:21:00.786  		
00:21:00.786  		'
00:21:00.786   10:58:49	-- scheduler/governor.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh
00:21:00.786    10:58:49	-- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:21:00.786    10:58:49	-- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:21:00.786    10:58:49	-- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:21:00.787    10:58:49	-- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler
00:21:00.787    10:58:49	-- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:21:00.787    10:58:49	-- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh
00:21:00.787     10:58:49	-- scheduler/cgroups.sh@245 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:21:00.787      10:58:49	-- scheduler/cgroups.sh@246 -- # check_cgroup
00:21:00.787      10:58:49	-- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:21:00.787      10:58:49	-- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:21:00.787      10:58:49	-- scheduler/cgroups.sh@10 -- # echo 2
00:21:00.787     10:58:49	-- scheduler/cgroups.sh@246 -- # cgroup_version=2
00:21:00.787   10:58:49	-- scheduler/governor.sh@12 -- # trap 'killprocess "$spdk_pid" || :; restore_cpufreq' EXIT
00:21:00.787   10:58:49	-- scheduler/governor.sh@157 -- # map_cpufreq
00:21:00.787   10:58:49	-- scheduler/common.sh@243 -- # cpufreq_drivers=()
00:21:00.787   10:58:49	-- scheduler/common.sh@243 -- # local -g cpufreq_drivers
00:21:00.787   10:58:49	-- scheduler/common.sh@244 -- # cpufreq_governors=()
00:21:00.787   10:58:49	-- scheduler/common.sh@244 -- # local -g cpufreq_governors
00:21:00.787   10:58:49	-- scheduler/common.sh@245 -- # cpufreq_base_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@245 -- # local -g cpufreq_base_freqs
00:21:00.787   10:58:49	-- scheduler/common.sh@246 -- # cpufreq_max_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@246 -- # local -g cpufreq_max_freqs
00:21:00.787   10:58:49	-- scheduler/common.sh@247 -- # cpufreq_min_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@247 -- # local -g cpufreq_min_freqs
00:21:00.787   10:58:49	-- scheduler/common.sh@248 -- # cpufreq_cur_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@248 -- # local -g cpufreq_cur_freqs
00:21:00.787   10:58:49	-- scheduler/common.sh@249 -- # cpufreq_is_turbo=()
00:21:00.787   10:58:49	-- scheduler/common.sh@249 -- # local -g cpufreq_is_turbo
00:21:00.787   10:58:49	-- scheduler/common.sh@250 -- # cpufreq_available_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@250 -- # local -g cpufreq_available_freqs
00:21:00.787   10:58:49	-- scheduler/common.sh@251 -- # cpufreq_available_governors=()
00:21:00.787   10:58:49	-- scheduler/common.sh@251 -- # local -g cpufreq_available_governors
00:21:00.787   10:58:49	-- scheduler/common.sh@252 -- # cpufreq_high_prio=()
00:21:00.787   10:58:49	-- scheduler/common.sh@252 -- # local -g cpufreq_high_prio
00:21:00.787   10:58:49	-- scheduler/common.sh@253 -- # cpufreq_non_turbo_ratio=()
00:21:00.787   10:58:49	-- scheduler/common.sh@253 -- # local -g cpufreq_non_turbo_ratio
00:21:00.787   10:58:49	-- scheduler/common.sh@254 -- # cpufreq_setspeed=()
00:21:00.787   10:58:49	-- scheduler/common.sh@254 -- # local -g cpufreq_setspeed
00:21:00.787   10:58:49	-- scheduler/common.sh@255 -- # cpuinfo_max_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@255 -- # local -g cpuinfo_max_freqs
00:21:00.787   10:58:49	-- scheduler/common.sh@256 -- # cpuinfo_min_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@256 -- # local -g cpuinfo_min_freqs
00:21:00.787   10:58:49	-- scheduler/common.sh@257 -- # local -g turbo_enabled=0
00:21:00.787   10:58:49	-- scheduler/common.sh@258 -- # local cpu cpu_idx
00:21:00.787   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:00.787   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=0
00:21:00.787   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq ]]
00:21:00.787   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:00.787   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:00.787   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu0/cpufreq/base_frequency ]]
00:21:00.787   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:00.787   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000166
00:21:00.787   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:00.787   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:00.787   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_0
00:21:00.787   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_0[@]'
00:21:00.787   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:00.787   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_0
00:21:00.787   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_0[@]'
00:21:00.787   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:00.787   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:00.787    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 0 0xce
00:21:00.787   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:00.787   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:00.787   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:00.787   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:00.787   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:00.787   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:00.787   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:00.787   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:00.787   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:00.787   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:00.787   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.787   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.787   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.787   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
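The cpu0 block above shows how map_cpufreq fills its tables for intel_pstate: MSR 0xCE (platform info) is read via rdmsr.pl, bits 15:8 of 0x70a2cf3811700 give the maximum non-turbo ratio 0x17 = 23 and hence a 2300000 kHz base; because cpuinfo_max_freq (3700000) is higher, turbo is flagged and the sentinel 2300001 heads an available-frequency list that steps down 100000 kHz at a time to the 1000000 kHz floor. A sketch of that derivation:

  msr=0x70a2cf3811700
  ratio=$(( (msr >> 8) & 0xff ))      # bits 15:8 -> 23, the non-turbo ratio
  base_khz=$(( ratio * 100000 ))      # 2300000 kHz base frequency
  freqs=(2300001)                     # turbo sentinel (base + 1)
  for (( f = base_khz; f >= 1000000; f -= 100000 )); do freqs+=( "$f" ); done
  echo "${freqs[@]}"                  # 2300001 2300000 2200000 ... 1000000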
00:21:00.787   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:00.787   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=1
00:21:00.787   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq ]]
00:21:00.787   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:00.787   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:00.787   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu1/cpufreq/base_frequency ]]
00:21:00.787   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:00.787   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:00.787   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=1000000
00:21:00.787   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:00.787   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_1
00:21:00.787   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_1[@]'
00:21:00.788   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:00.788   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_1
00:21:00.788   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_1[@]'
00:21:00.788   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:00.788   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:00.788    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 1 0xce
00:21:00.788   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:00.788   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:00.788   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:00.788   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:00.788   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:00.788   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:00.788   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:00.788   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:00.788   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:00.788   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:00.788   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
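The rdmsr.pl call traced at @295 reads MSR 0xCE (MSR_PLATFORM_INFO); bits 15:8 of the returned value hold the maximum non-turbo ratio in units of 100 MHz, which is how 0x70a2cf3811700 becomes the 23 recorded at @298. A quick sketch of the decode (variable names are illustrative):

    non_turbo_ratio=0x70a2cf3811700            # raw value seen in the trace above
    ratio=$(((non_turbo_ratio >> 8) & 0xff))   # bits 15:8 -> 0x17 = 23
    echo "non-turbo ratio $ratio -> $((ratio * 100)) MHz"   # 2300 MHz = 2300000 kHz

That 2300 MHz figure is exactly the base_max_freq the script settles on at @304.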
00:21:00.788   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:00.788   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=10
00:21:00.788   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq ]]
00:21:00.788   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:00.788   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:00.788   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu10/cpufreq/base_frequency ]]
00:21:00.788   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:00.788   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:00.788   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:00.788   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:00.788   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_10
00:21:00.788   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_10[@]'
00:21:00.788   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:00.788   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_10
00:21:00.788   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_10[@]'
00:21:00.788   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:00.788   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:00.788    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 10 0xce
00:21:00.788   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:00.788   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:00.788   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:00.788   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:00.788   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:00.788   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:00.788   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:00.788   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:00.788   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:00.788   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:00.788   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.788   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.788   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:00.788   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:00.789   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:00.789   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:00.789   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
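Steps @262-@273 and @296-@297 populate the per-CPU bookkeeping from sysfs. The trace only shows the resulting values; assuming the standard cpufreq attribute names, the reads look roughly like this (a sketch, not the literal common.sh code):

    cpu=/sys/devices/system/cpu/cpu10
    if [[ -e $cpu/cpufreq ]]; then
      driver=$(<"$cpu/cpufreq/scaling_driver")           # -> intel_pstate
      governor=$(<"$cpu/cpufreq/scaling_governor")       # -> powersave
      [[ -e $cpu/cpufreq/base_frequency ]] && base=$(<"$cpu/cpufreq/base_frequency")  # -> 2300000
      cur=$(<"$cpu/cpufreq/scaling_cur_freq")            # -> 1000000
      max=$(<"$cpu/cpufreq/scaling_max_freq")            # -> 2300001
      min=$(<"$cpu/cpufreq/scaling_min_freq")            # -> 1000000
      cpuinfo_min=$(<"$cpu/cpufreq/cpuinfo_min_freq")    # -> 1000000
      cpuinfo_max=$(<"$cpu/cpufreq/cpuinfo_max_freq")    # -> 3700000
    fi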
00:21:00.789   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:00.789   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=11
00:21:00.789   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq ]]
00:21:00.789   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:00.789   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:00.789   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu11/cpufreq/base_frequency ]]
00:21:00.789   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:00.789   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:00.789   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:00.789   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:00.789   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_11
00:21:00.789   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_11[@]'
00:21:00.789   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:00.789   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_11
00:21:00.789   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_11[@]'
00:21:00.789   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:00.789   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:00.789    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 11 0xce
00:21:00.789   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.055   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.055   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.055   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.055   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.055   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.055   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.055   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.055   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.055   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.055   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.055   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.055   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.055   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.055   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.055   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.055   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.055   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.055   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.055   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
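Steps @307-@309 decide whether a turbo slot is added: the base maximum (2300000 kHz) is below cpuinfo_max_freq (3700000 kHz), so num_freqs grows from 14 to 15 and slot 0 gets the 2300001 sentinel. Note, incidentally, that the @293 trace declares the counter as num_freq while later steps use num_freqs, so the counter appears never to be the declared local. The check with this run's numbers:

    base_max_freq=2300000 cpuinfo_max_freq=3700000 num_freqs=14 is_turbo=0
    if ((base_max_freq < cpuinfo_max_freq)); then
      ((num_freqs += 1))   # make room for the turbo slot (14 -> 15 entries)
      is_turbo=1           # slot 0 becomes base_max_freq + 1, i.e. 2300001
    fi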
00:21:01.056   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.056   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=12
00:21:01.056   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq ]]
00:21:01.056   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.056   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.056   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu12/cpufreq/base_frequency ]]
00:21:01.056   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.056   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.056   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.056   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.056   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_12
00:21:01.056   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_12[@]'
00:21:01.056   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.056   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_12
00:21:01.056   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_12[@]'
00:21:01.056   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.056   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.056    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 12 0xce
00:21:01.056   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.056   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.056   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.056   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.056   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.056   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.056   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.056   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.056   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.056   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.056   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.056   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.056   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.056   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
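Steps @275-@280 show the bookkeeping pattern for per-CPU arrays: the loop writes through a bash nameref into a variable named after the CPU, and records the string 'available_freqs_cpu_N[@]' in a global array so the list can be recovered later. A self-contained sketch of the write side (toy values, hypothetical function name):

    declare -a cpufreq_available_freqs
    fill_cpu_freqs() {
      local cpu_idx=$1
      local -n available_freqs=available_freqs_cpu_$cpu_idx    # nameref, as at @279
      cpufreq_available_freqs[cpu_idx]="available_freqs_cpu_${cpu_idx}[@]"
      available_freqs=(2300001 2300000 2200000)   # toy values; the real loop fills 15 entries
    }
    fill_cpu_freqs 12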
00:21:01.056   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.056   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=13
00:21:01.056   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq ]]
00:21:01.056   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.056   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.056   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu13/cpufreq/base_frequency ]]
00:21:01.057   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.057   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.057   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.057   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.057   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_13
00:21:01.057   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_13[@]'
00:21:01.057   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.057   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_13
00:21:01.057   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_13[@]'
00:21:01.057   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.057   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.057    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 13 0xce
00:21:01.057   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.057   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.057   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.057   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.057   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.057   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.057   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.057   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.057   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.057   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.057   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
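The matching read side, for whenever a consumer later needs one CPU's list back: the stored 'name[@]' string is expanded indirectly with ${!ref}. A self-contained sketch with toy values:

    declare -a available_freqs_cpu_12=(2300001 2300000 2200000)   # toy values
    declare -a cpufreq_available_freqs
    cpufreq_available_freqs[12]='available_freqs_cpu_12[@]'
    ref=${cpufreq_available_freqs[12]}
    freqs=("${!ref}")    # indirect expansion recovers the per-CPU array
    echo "${freqs[0]}"   # -> 2300001, the turbo sentinel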
00:21:01.057   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.057   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=14
00:21:01.057   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq ]]
00:21:01.057   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.057   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.057   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu14/cpufreq/base_frequency ]]
00:21:01.057   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.057   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.057   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.057   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.057   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_14
00:21:01.057   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_14[@]'
00:21:01.057   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.057   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_14
00:21:01.057   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_14[@]'
00:21:01.057   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.057   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.057    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 14 0xce
00:21:01.057   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.057   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.057   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.057   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.057   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.057   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.057   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.057   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.057   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.057   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.057   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.057   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.057   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.057   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
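Step @299 compares the base frequency (converted to ratio units) against the package non-turbo ratio; presumably this flags favored cores whose base clock sits above the package baseline, though every core traced here comes out 0 because 2300000 / 100000 is exactly 23. The check with this run's numbers:

    base_freq=2300000 non_turbo_ratio=23
    if ((base_freq / 100000 > non_turbo_ratio)); then
      high_prio=1   # base clock above the package non-turbo ratio
    else
      high_prio=0   # 23 > 23 is false, as on every core in this trace
    fi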
00:21:01.058   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.058   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=15
00:21:01.058   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq ]]
00:21:01.058   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.058   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.058   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu15/cpufreq/base_frequency ]]
00:21:01.058   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.058   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.058   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.058   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.058   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_15
00:21:01.058   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_15[@]'
00:21:01.058   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.058   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_15
00:21:01.058   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_15[@]'
00:21:01.058   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.058   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.058    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 15 0xce
00:21:01.058   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.058   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.058   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.058   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.058   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.058   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.058   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.058   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.058   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.058   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.058   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.058   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.058   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.058   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
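The case at @282 is what routes intel_pstate down this MSR-based path in the first place: unlike the governors read directly at @277, intel_pstate publishes no scaling_available_frequencies file, so the table has to be synthesized. For a driver that does publish one (e.g. acpi-cpufreq), the list could be read in one shot (sketch):

    f=/sys/devices/system/cpu/cpu15/cpufreq/scaling_available_frequencies
    [[ -e $f ]] && available_freqs=($(<"$f"))   # absent under intel_pstate on this box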
00:21:01.058   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.058   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=16
00:21:01.058   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq ]]
00:21:01.058   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.058   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.058   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu16/cpufreq/base_frequency ]]
00:21:01.058   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.058   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.058   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.058   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.058   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_16
00:21:01.058   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_16[@]'
00:21:01.058   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.058   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_16
00:21:01.058   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_16[@]'
00:21:01.058   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.058   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.059    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 16 0xce
00:21:01.059   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.059   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.059   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.059   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.059   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.059   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.059   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.059   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.059   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.059   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.059   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
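Each per-core block also shells out to rdmsr.pl to read MSR 0xCE (MSR_PLATFORM_INFO); bits 15:8 of that register hold the maximum non-turbo ratio, which is how the raw value 0x70a2cf3811700 becomes the logged ratio of 23 and the 2300000 kHz base. A sketch of that decode, using only shell arithmetic (rdmsr.pl itself ships with SPDK and is not reproduced here):

    #!/usr/bin/env bash
    # Decode the MSR_PLATFORM_INFO value captured in the trace above.
    msr=0x70a2cf3811700
    ratio=$(( (msr >> 8) & 0xff ))      # bits 15:8 -> 0x17 == 23
    base_khz=$(( ratio * 100000 ))      # ratio * 100 MHz == 2300000 kHz
    echo "non-turbo ratio $ratio -> base frequency $base_khz kHz"

Because every core on this host reports the same MSR value, the derived ratio of 23 and base frequency of 2300000 kHz repeat verbatim in each block.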
00:21:01.059   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.059   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=17
00:21:01.059   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq ]]
00:21:01.059   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.059   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.059   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu17/cpufreq/base_frequency ]]
00:21:01.059   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.059   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000120
00:21:01.059   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.059   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.059   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_17
00:21:01.059   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_17[@]'
00:21:01.059   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.059   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_17
00:21:01.059   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_17[@]'
00:21:01.059   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.059   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.059    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 17 0xce
00:21:01.059   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.059   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.059   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.059   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.059   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.059   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.059   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.059   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.059   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.059   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.059   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.059   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.059   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.059   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.060   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=18
00:21:01.060   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq ]]
00:21:01.060   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.060   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.060   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu18/cpufreq/base_frequency ]]
00:21:01.060   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.060   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.060   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.060   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.060   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_18
00:21:01.060   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_18[@]'
00:21:01.060   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.060   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_18
00:21:01.060   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_18[@]'
00:21:01.060   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.060   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.060    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 18 0xce
00:21:01.060   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.060   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.060   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.060   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.060   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.060   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.060   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.060   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.060   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.060   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.060   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.060   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.060   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.060   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.060   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=19
00:21:01.060   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq ]]
00:21:01.060   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.060   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.060   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu19/cpufreq/base_frequency ]]
00:21:01.060   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.061   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:01.061   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.061   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_19
00:21:01.061   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_19[@]'
00:21:01.061   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.061   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_19
00:21:01.061   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_19[@]'
00:21:01.061   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.061   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.061    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 19 0xce
00:21:01.061   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.061   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.061   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.061   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.061   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.061   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.061   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.061   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.061   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.061   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.061   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=2
00:21:01.061   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq ]]
00:21:01.061   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.061   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.061   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu2/cpufreq/base_frequency ]]
00:21:01.061   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_2
00:21:01.061   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_2[@]'
00:21:01.061   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.061   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_2
00:21:01.061   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_2[@]'
00:21:01.061   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.061   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.061    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 2 0xce
00:21:01.061   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.061   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.061   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.061   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.061   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.061   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.061   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.061   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.061   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.061   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.061   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.061   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.061   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
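Note that cpu2 above is the outlier in this pass: its scaling cur/min/max frequencies all read 2300000, i.e. the core is currently pinned at its base frequency, while the neighbouring cores float between the 1000000 floor and a turbo ceiling. The sysfs reads behind common.sh@262-273 can be reproduced stand-alone; the script below is an illustrative re-creation, not the SPDK helper:

    #!/usr/bin/env bash
    # Probe the same cpufreq sysfs attributes the trace above records.
    sysfs_cpu=/sys/devices/system/cpu
    for cpu in "$sysfs_cpu"/cpu[0-9]*; do
        [[ -e $cpu/cpufreq ]] || continue          # skip cores without cpufreq
        idx=${cpu##*cpu}
        driver=$(< "$cpu/cpufreq/scaling_driver")
        governor=$(< "$cpu/cpufreq/scaling_governor")
        cur=$(< "$cpu/cpufreq/scaling_cur_freq")
        min=$(< "$cpu/cpufreq/scaling_min_freq")
        max=$(< "$cpu/cpufreq/scaling_max_freq")
        printf 'cpu%-3s %s/%s cur=%s min=%s max=%s\n' \
            "$idx" "$driver" "$governor" "$cur" "$min" "$max"
    done

Globs expand in lexicographic order, which is also why the trace visits cpu2 between cpu19 and cpu20 rather than in numeric order.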
00:21:01.062   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.062   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=20
00:21:01.062   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq ]]
00:21:01.062   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.062   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.062   10:58:49	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu20/cpufreq/base_frequency ]]
00:21:01.062   10:58:49	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.062   10:58:49	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.062   10:58:49	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:01.062   10:58:49	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.062   10:58:49	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_20
00:21:01.062   10:58:49	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_20[@]'
00:21:01.062   10:58:49	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.062   10:58:49	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_20
00:21:01.062   10:58:49	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_20[@]'
00:21:01.062   10:58:49	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.062   10:58:49	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.062    10:58:49	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 20 0xce
00:21:01.062   10:58:49	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.062   10:58:49	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.062   10:58:49	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.062   10:58:49	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.062   10:58:49	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.062   10:58:49	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.062   10:58:49	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.062   10:58:49	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.062   10:58:49	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.062   10:58:49	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.062   10:58:49	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.062   10:58:49	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.062   10:58:49	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.062   10:58:49	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.062   10:58:49	-- scheduler/common.sh@261 -- # cpu_idx=21
00:21:01.062   10:58:49	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq ]]
00:21:01.062   10:58:49	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.062   10:58:49	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.062   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu21/cpufreq/base_frequency ]]
00:21:01.062   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.062   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.062   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:01.062   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.062   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_21
00:21:01.062   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_21[@]'
00:21:01.062   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.062   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_21
00:21:01.062   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_21[@]'
00:21:01.062   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.062   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.062    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 21 0xce
00:21:01.063   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.063   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.063   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.063   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.063   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.063   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.063   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.063   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.063   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.063   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.063   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.063   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=22
00:21:01.063   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq ]]
00:21:01.063   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.063   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.063   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu22/cpufreq/base_frequency ]]
00:21:01.063   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.063   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.063   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=3700000
00:21:01.063   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.063   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_22
00:21:01.063   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_22[@]'
00:21:01.063   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.063   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_22
00:21:01.063   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_22[@]'
00:21:01.063   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.063   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.063    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 22 0xce
00:21:01.063   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.063   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.063   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.063   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.063   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.063   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.063   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.063   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.063   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.063   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.063   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.063   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.063   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.063   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
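Each block delimited by the `for cpu in "$sysfs_cpu/cpu"+([0-9])` marker above is one pass of the cpufreq discovery loop in scheduler/common.sh (lines 260-280): the script records driver, governor, and frequency limits per logical CPU, then binds a dynamically named per-CPU array through a nameref. A condensed sketch of that pass, assembled from the trace; the sysfs file names for the reads (scaling_driver, scaling_governor, and so on) are standard cpufreq entries inferred from the logged values, not shown verbatim in the trace:

  shopt -s extglob
  sysfs_cpu=/sys/devices/system/cpu
  declare -a cpufreq_drivers cpufreq_governors cpufreq_available_governors

  map_cpufreq_sketch() {
    local cpu cpu_idx
    for cpu in "$sysfs_cpu/cpu"+([0-9]); do
      cpu_idx=${cpu##*cpu}
      [[ -e $cpu/cpufreq ]] || continue                      # common.sh@262
      cpufreq_drivers[cpu_idx]=$(< "$cpu/cpufreq/scaling_driver")
      cpufreq_governors[cpu_idx]=$(< "$cpu/cpufreq/scaling_governor")
      # one dynamically named array per CPU, reached through a nameref
      local -n available_governors=available_governors_cpu_$cpu_idx
      cpufreq_available_governors[cpu_idx]="available_governors_cpu_${cpu_idx}[@]"
      available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
    done
  }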
00:21:01.064   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.064   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=23
00:21:01.064   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq ]]
00:21:01.064   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.064   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.064   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu23/cpufreq/base_frequency ]]
00:21:01.064   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.064   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.064   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.064   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.064   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_23
00:21:01.064   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_23[@]'
00:21:01.064   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.064   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_23
00:21:01.064   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_23[@]'
00:21:01.064   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.064   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.064    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 23 0xce
00:21:01.064   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.064   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.064   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.064   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.064   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.064   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.064   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.064   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.064   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.064   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.064   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.064   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.064   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.064   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.329   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.330   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=24
00:21:01.330   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq ]]
00:21:01.330   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.330   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.330   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu24/cpufreq/base_frequency ]]
00:21:01.330   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.330   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.330   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.330   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.330   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_24
00:21:01.330   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_24[@]'
00:21:01.330   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.330   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_24
00:21:01.330   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_24[@]'
00:21:01.330   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.330   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.330    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 24 0xce
00:21:01.330   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.330   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.330   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.330   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.330   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.330   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.330   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.330   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.330   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.330   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.330   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
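The num_freqs=14 at common.sh@306 is consistent with counting the 100 MHz steps between cpuinfo_min_freqs (1000000 kHz) and base_max_freq (2300000 kHz) inclusive; because base_max_freq is below cpuinfo_max_freqs (3700000 kHz), the core can turbo, so line 308 widens the table by one slot. A sketch of that sizing, assuming the count is derived from the min/base range (the exact expression is not shown in this trace):

  # 14 fixed P-states: 1.0 GHz .. 2.3 GHz in 100 MHz (100000 kHz) steps
  num_freqs=$(( (base_max_freq - cpuinfo_min_freqs[cpu_idx]) / 100000 + 1 ))
  if (( base_max_freq < cpuinfo_max_freqs[cpu_idx] )); then  # common.sh@307
    (( num_freqs += 1 ))              # extra slot for the turbo sentinel
    cpufreq_is_turbo[cpu_idx]=1       # common.sh@309
  fi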
00:21:01.330   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.330   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=25
00:21:01.330   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq ]]
00:21:01.330   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.330   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.330   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu25/cpufreq/base_frequency ]]
00:21:01.330   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.330   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000028
00:21:01.330   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.330   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.330   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_25
00:21:01.330   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_25[@]'
00:21:01.330   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.330   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_25
00:21:01.330   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_25[@]'
00:21:01.330   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.330   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.330    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 25 0xce
00:21:01.330   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.330   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.330   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.330   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.330   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.330   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.330   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.330   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.330   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.330   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.330   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.330   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.330   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.330   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.331   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=26
00:21:01.331   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu26/cpufreq ]]
00:21:01.331   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.331   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.331   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu26/cpufreq/base_frequency ]]
00:21:01.331   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.331   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=999997
00:21:01.331   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.331   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.331   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_26
00:21:01.331   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_26[@]'
00:21:01.331   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.331   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_26
00:21:01.331   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_26[@]'
00:21:01.331   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.331   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.331    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 26 0xce
00:21:01.331   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.331   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.331   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.331   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.331   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.331   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.331   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.331   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.331   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.331   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.331   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.331   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.331   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.331   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
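The long alternation of common.sh@314/@315/@318 above is the table fill itself: slot 0 advertises turbo as base_max_freq + 1 kHz (the 2300001 written at line 316), and every later slot steps down 100000 kHz from base_max_freq. A minimal reconstruction of that loop from the trace (the real scheduler/common.sh may compute the step differently):

  available_freqs=()
  for ((freq = 0; freq < num_freqs; freq++)); do
    if ((freq == 0 && cpufreq_is_turbo[cpu_idx] == 1)); then
      available_freqs[freq]=$((base_max_freq + 1))   # turbo sentinel, 2300001
    else
      # 2300000, 2200000, ... down to 1000000
      available_freqs[freq]=$((base_max_freq - (freq - cpufreq_is_turbo[cpu_idx]) * 100000))
    fi
  done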
00:21:01.331   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.331   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=27
00:21:01.332   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu27/cpufreq ]]
00:21:01.332   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.332   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.332   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu27/cpufreq/base_frequency ]]
00:21:01.332   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.332   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.332   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.332   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.332   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_27
00:21:01.332   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_27[@]'
00:21:01.332   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.332   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_27
00:21:01.332   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_27[@]'
00:21:01.332   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.332   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.332    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 27 0xce
00:21:01.332   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.332   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.332   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.332   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.332   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.332   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.332   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.332   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.332   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.332   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.332   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.332   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=28
00:21:01.332   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu28/cpufreq ]]
00:21:01.332   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.332   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.332   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu28/cpufreq/base_frequency ]]
00:21:01.332   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.332   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.332   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.332   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.332   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_28
00:21:01.332   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_28[@]'
00:21:01.332   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.332   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_28
00:21:01.332   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_28[@]'
00:21:01.332   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.332   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.332    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 28 0xce
00:21:01.332   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.332   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.332   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.332   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.332   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.332   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.332   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.332   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.332   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.332   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.332   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.332   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.332   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.332   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.333   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=29
00:21:01.333   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu29/cpufreq ]]
00:21:01.333   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.333   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.333   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu29/cpufreq/base_frequency ]]
00:21:01.333   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.333   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.333   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.333   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.333   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_29
00:21:01.333   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_29[@]'
00:21:01.333   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.333   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_29
00:21:01.333   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_29[@]'
00:21:01.333   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.333   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.333    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 29 0xce
00:21:01.333   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.333   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.333   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.333   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.333   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.333   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.333   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.333   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.333   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.333   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.333   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.333   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.333   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.333   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
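Note the enumeration order: cpu29 is followed by cpu3, then cpu30. That is a consequence of the glob in common.sh line 260, "$sysfs_cpu/cpu"+([0-9]), which expands in lexicographic order, so cpu3 sorts between cpu29 and cpu30. A small sketch of that outer loop, assuming the standard sysfs layout (the loop body here is illustrative, not quoted from the script):

```bash
#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below requires extended globbing

sysfs_cpu=/sys/devices/system/cpu
# Glob expansion is lexicographic: cpu0 cpu1 cpu10 ... cpu29 cpu3 cpu30 ...
for cpu in "$sysfs_cpu/cpu"+([0-9]); do
	cpu_idx=${cpu##*cpu}                # strip the path prefix: cpu29 -> 29
	[[ -e $cpu/cpufreq ]] || continue   # skip cores without cpufreq support
	echo "probing cpu$cpu_idx"
done
```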
00:21:01.333   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.333   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=3
00:21:01.333   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu3/cpufreq ]]
00:21:01.333   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.333   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.333   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu3/cpufreq/base_frequency ]]
00:21:01.333   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.333   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2298560
00:21:01.333   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300000
00:21:01.333   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=2300000
00:21:01.333   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_3
00:21:01.333   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_3[@]'
00:21:01.333   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.333   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_3
00:21:01.334   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_3[@]'
00:21:01.334   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.334   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.334    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 3 0xce
00:21:01.334   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.334   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.334   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.334   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.334   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.334   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.334   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.334   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.334   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.334   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.334   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
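cpu3 stands out in this trace: its scaling min and max are both 2300000 and its live frequency sits at 2298560, whereas the surrounding cores float between 1000000 and the 2300001 sentinel. The core has evidently been pinned to its base frequency before this probe ran; the trace does not say why, though pinning the core that drives the scheduler test itself would be a plausible reason. A hypothetical classifier that separates the two states seen here (not part of common.sh):

```bash
# Classify a core from its scaling limits; values taken from the cpu3 and
# cpu30 blocks above (all in kHz).
classify() {
	local min=$1 max=$2 base=$3
	if ((min == max && max == base)); then
		echo "pinned to base frequency"
	elif ((max == base + 1)); then
		echo "floating, turbo sentinel set"
	else
		echo "floating"
	fi
}

classify 2300000 2300000 2300000   # cpu3  -> pinned to base frequency
classify 1000000 2300001 2300000   # cpu30 -> floating, turbo sentinel set
```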
00:21:01.334   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.334   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=30
00:21:01.334   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu30/cpufreq ]]
00:21:01.334   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.334   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.334   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu30/cpufreq/base_frequency ]]
00:21:01.334   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.334   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000371
00:21:01.334   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.334   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.334   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_30
00:21:01.334   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_30[@]'
00:21:01.334   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.334   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_30
00:21:01.334   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_30[@]'
00:21:01.334   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.334   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.334    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 30 0xce
00:21:01.334   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.334   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.334   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.334   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.334   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.334   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.334   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.334   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.334   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.334   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.334   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.334   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.334   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.334   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
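Each block shells out to rdmsr.pl to read MSR 0xCE (MSR_PLATFORM_INFO) for the core being probed; bits 15:8 of that register carry the maximum non-turbo ratio, and multiplying the ratio by the 100 MHz bus clock yields the 2.3 GHz base frequency seen throughout. The extraction can be checked against the logged raw value directly:

```bash
# Decode the non-turbo ratio the way the trace derives 23 from the MSR value.
rdmsr_output=0x70a2cf3811700   # raw MSR 0xCE (MSR_PLATFORM_INFO) from rdmsr.pl

# Bits 15:8 of MSR_PLATFORM_INFO = maximum non-turbo ratio (100 MHz units).
non_turbo_ratio=$(((rdmsr_output >> 8) & 0xff))
echo "$non_turbo_ratio"                    # -> 23
echo "$((non_turbo_ratio * 100000)) kHz"   # -> 2300000 kHz = 2.3 GHz base
```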
00:21:01.335   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.335   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=31
00:21:01.335   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu31/cpufreq ]]
00:21:01.335   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.335   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.335   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu31/cpufreq/base_frequency ]]
00:21:01.335   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.335   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.335   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.335   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.335   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_31
00:21:01.335   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_31[@]'
00:21:01.335   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.335   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_31
00:21:01.335   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_31[@]'
00:21:01.335   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.335   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.335    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 31 0xce
00:21:01.335   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.335   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.335   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.335   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.335   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.335   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.335   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.335   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.335   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.335   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.335   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.335   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.335   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.335   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
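Turbo support is inferred per core at common.sh lines 307-309 by comparing the base maximum against the cpuinfo maximum: 2300000 < 3700000 on every core here, so one extra slot is reserved and the turbo flag is raised. Reduced to its essentials with the logged values:

```bash
# Turbo detection as traced at common.sh lines 307-309 (cpu31's values).
cpuinfo_max_freq=3700000   # hardware ceiling reported via cpufreq
base_max_freq=2300000      # non-turbo ceiling derived from MSR 0xCE
num_freqs=14               # count of non-turbo 100 MHz steps
is_turbo=0

if ((base_max_freq < cpuinfo_max_freq)); then
	((num_freqs += 1))   # make room for the turbo sentinel in slot 0
	is_turbo=1
fi
echo "num_freqs=$num_freqs is_turbo=$is_turbo"   # -> num_freqs=15 is_turbo=1
```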
00:21:01.335   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.335   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=32
00:21:01.335   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu32/cpufreq ]]
00:21:01.335   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.335   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.335   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu32/cpufreq/base_frequency ]]
00:21:01.335   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.335   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=999615
00:21:01.335   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.336   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.336   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_32
00:21:01.336   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_32[@]'
00:21:01.336   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.336   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_32
00:21:01.336   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_32[@]'
00:21:01.336   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.336   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.336    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 32 0xce
00:21:01.336   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.336   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.336   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.336   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.336   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.336   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.336   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.336   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.336   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.336   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.336   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
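The per-CPU governor and frequency lists are kept in dynamically named arrays (available_freqs_cpu_32 and so on): a bash nameref lets the loop body write through a fixed local name, while a master array records the 'name[@]' string for later indirect expansion. A minimal sketch of the pattern, assuming a consumer that reads the table back (the populate/readback split is illustrative, not quoted from common.sh):

```bash
#!/usr/bin/env bash
declare -a cpufreq_available_freqs   # cpu_idx -> "available_freqs_cpu_N[@]"

populate() {
	local cpu_idx=$1; shift
	# Nameref: assignments to available_freqs land in available_freqs_cpu_N.
	local -n available_freqs="available_freqs_cpu_${cpu_idx}"
	# Record the 'name[@]' string so callers can expand it indirectly later.
	cpufreq_available_freqs[cpu_idx]="available_freqs_cpu_${cpu_idx}[@]"
	available_freqs=("$@")
}

populate 32 2300001 2300000 2200000
# Indirect expansion through the stored 'name[@]' string:
echo "${!cpufreq_available_freqs[32]}"   # -> 2300001 2300000 2200000
```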
00:21:01.336   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.336   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=33
00:21:01.336   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu33/cpufreq ]]
00:21:01.336   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.336   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.336   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu33/cpufreq/base_frequency ]]
00:21:01.336   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.336   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.336   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.336   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.336   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_33
00:21:01.336   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_33[@]'
00:21:01.336   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.336   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_33
00:21:01.336   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_33[@]'
00:21:01.336   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.336   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.336    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 33 0xce
00:21:01.336   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.336   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.336   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.336   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.336   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.336   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.336   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.336   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.336   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.336   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.336   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.336   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.336   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.336   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
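Before any MSR work, each block samples the core's cpufreq sysfs node: driver, governor, base frequency, and the current/min/max scaling limits. The trace only shows the derived assignments, so the attribute file names below are the standard cpufreq ones assumed to back them:

```bash
# Sample one core's cpufreq state roughly as common.sh lines 262-277 do.
cpu=/sys/devices/system/cpu/cpu33
if [[ -e $cpu/cpufreq ]]; then
	driver=$(< "$cpu/cpufreq/scaling_driver")       # intel_pstate
	governor=$(< "$cpu/cpufreq/scaling_governor")   # powersave
	cur_freq=$(< "$cpu/cpufreq/scaling_cur_freq")   # e.g. 1000000
	max_freq=$(< "$cpu/cpufreq/scaling_max_freq")   # e.g. 2300001
	min_freq=$(< "$cpu/cpufreq/scaling_min_freq")   # e.g. 1000000
	governors=($(< "$cpu/cpufreq/scaling_available_governors"))
	echo "$driver/$governor cur=${cur_freq} range=${min_freq}-${max_freq}"
fi
```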
00:21:01.337   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.337   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=34
00:21:01.337   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq ]]
00:21:01.337   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.337   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.337   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu34/cpufreq/base_frequency ]]
00:21:01.337   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.337   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000300
00:21:01.337   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.337   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.337   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_34
00:21:01.337   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_34[@]'
00:21:01.337   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.337   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_34
00:21:01.337   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_34[@]'
00:21:01.337   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.337   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.337    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 34 0xce
00:21:01.337   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.337   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.337   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.337   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.337   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.337   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.337   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.337   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.337   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.337   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.337   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.337   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.337   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.337   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
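Line 299's comparison marks "high priority" cores: a core whose base-frequency ratio exceeds the package non-turbo ratio (as it would on parts with asymmetric base frequencies, e.g. Intel SST-BF) gets cpufreq_high_prio set. On this host 2300000 / 100000 is exactly 23, so 23 > 23 is false and every core lands at priority 0:

```bash
# High-priority core check from common.sh line 299 (cpu34's values).
base_freq=2300000      # per-core base frequency in kHz
non_turbo_ratio=23     # package ratio decoded from MSR 0xCE
if ((base_freq / 100000 > non_turbo_ratio)); then
	high_prio=1   # core's base ratio is raised above the package ratio
else
	high_prio=0   # equal ratios on this host, so no high-prio cores
fi
echo "$high_prio"   # -> 0
```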
00:21:01.337   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.337   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=35
00:21:01.337   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq ]]
00:21:01.337   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.337   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.337   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu35/cpufreq/base_frequency ]]
00:21:01.337   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.337   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000128
00:21:01.337   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.337   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.337   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_35
00:21:01.337   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_35[@]'
00:21:01.337   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.337   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_35
00:21:01.337   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_35[@]'
00:21:01.337   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.337   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.602    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 35 0xce
00:21:01.602   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.602   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.602   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.602   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.602   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.602   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.602   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.602   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.602   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.602   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.602   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
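Each per-CPU pass shells out to rdmsr.pl with register 0xce; on Intel parts this is MSR_PLATFORM_INFO, whose bits 15:8 carry the maximum non-turbo ratio in 100 MHz units. A one-liner shows how the 0x70a2cf3811700 readout above turns into the logged ratio of 23 (23 x 100 MHz = the 2300000 kHz base frequency); this decode is a sketch of what common.sh@295-298 appears to compute, not its verbatim source:

    non_turbo_ratio=0x70a2cf3811700
    echo $(( (non_turbo_ratio >> 8) & 0xff ))   # 23, i.e. 2.3 GHz base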
00:21:01.602   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.602   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=36
00:21:01.602   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq ]]
00:21:01.602   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.602   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.602   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu36/cpufreq/base_frequency ]]
00:21:01.602   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.602   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=999960
00:21:01.602   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.602   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.602   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_36
00:21:01.602   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_36[@]'
00:21:01.602   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.602   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_36
00:21:01.602   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_36[@]'
00:21:01.602   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.602   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.602    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 36 0xce
00:21:01.602   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.602   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.602   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.602   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.602   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.602   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.602   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.602   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.602   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.602   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.602   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.602   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.602   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.602   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.603   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=37
00:21:01.603   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq ]]
00:21:01.603   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.603   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.603   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu37/cpufreq/base_frequency ]]
00:21:01.603   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.603   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.603   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=1000000
00:21:01.603   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.603   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_37
00:21:01.603   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_37[@]'
00:21:01.603   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.603   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_37
00:21:01.603   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_37[@]'
00:21:01.603   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.603   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.603    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 37 0xce
00:21:01.603   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.603   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.603   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.603   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.603   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.603   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.603   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.603   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.603   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.603   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.603   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.603   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.603   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.603   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
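The @275-280 lines repeat a bash idiom worth calling out: a nameref gives each CPU a dynamically named governor/frequency array, while a global array stores the "name[@]" string so the data can be recovered later via indirect expansion. A self-contained sketch of that pattern (variable names match the trace; 'declare -n' stands in for the 'local -n' used inside the function):

    cpu_idx=37
    declare -n available_freqs="available_freqs_cpu_${cpu_idx}"
    cpufreq_available_freqs[cpu_idx]="available_freqs_cpu_${cpu_idx}[@]"
    available_freqs=(2300001 2300000 2200000)      # writes available_freqs_cpu_37
    echo "${!cpufreq_available_freqs[cpu_idx]}"    # 2300001 2300000 2200000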
00:21:01.603   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.603   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=38
00:21:01.603   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq ]]
00:21:01.604   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.604   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.604   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu38/cpufreq/base_frequency ]]
00:21:01.604   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.604   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_38
00:21:01.604   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_38[@]'
00:21:01.604   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.604   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_38
00:21:01.604   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_38[@]'
00:21:01.604   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.604   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.604    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 38 0xce
00:21:01.604   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.604   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.604   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.604   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.604   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.604   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.604   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.604   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.604   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.604   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.604   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=39
00:21:01.604   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq ]]
00:21:01.604   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.604   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.604   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu39/cpufreq/base_frequency ]]
00:21:01.604   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.604   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2299996
00:21:01.604   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_39
00:21:01.604   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_39[@]'
00:21:01.604   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.604   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_39
00:21:01.604   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_39[@]'
00:21:01.604   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.604   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.604    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 39 0xce
00:21:01.604   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.604   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.604   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.604   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.604   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.604   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.604   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.604   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.604   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.604   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.604   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.604   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.604   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.604   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
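Note the iteration order here: cpu39 is followed by cpu4, then cpu40. That is not a glitch; the @260 loop globs "$sysfs_cpu/cpu"+([0-9]), and pathname expansion sorts lexicographically, so cpu4 lands between cpu39 and cpu40. A sketch reproducing the ordering (extglob assumed on, as the pattern requires):

    shopt -s extglob nullglob
    sysfs_cpu=/sys/devices/system/cpu
    for cpu in "$sysfs_cpu/cpu"+([0-9]); do
        echo "${cpu##*cpu}"   # ... 38 39 4 40 41 ...
    done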
00:21:01.605   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.605   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=4
00:21:01.605   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq ]]
00:21:01.605   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.605   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.605   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu4/cpufreq/base_frequency ]]
00:21:01.605   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.605   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2300559
00:21:01.605   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300000
00:21:01.605   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=2300000
00:21:01.605   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_4
00:21:01.605   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_4[@]'
00:21:01.605   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.605   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_4
00:21:01.605   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_4[@]'
00:21:01.605   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.605   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.605    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 4 0xce
00:21:01.605   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.605   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.605   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.605   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.605   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.605   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.605   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.605   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.605   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.605   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.605   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.605   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.605   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.605   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
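For reference, the per-CPU values captured at @262-273 all look like plain sysfs reads. A sketch of equivalent standalone reads for cpu4 (the paths are the standard cpufreq nodes; which exact node feeds each logged array is inferred from the values, so treat the mapping as an assumption):

    cpu=/sys/devices/system/cpu/cpu4
    driver=$(< "$cpu/cpufreq/scaling_driver")        # intel_pstate
    governor=$(< "$cpu/cpufreq/scaling_governor")    # powersave
    base_freq=$(< "$cpu/cpufreq/base_frequency")     # 2300000 kHz
    cur_freq=$(< "$cpu/cpufreq/scaling_cur_freq")    # 2300559 in this log
    max_freq=$(< "$cpu/cpufreq/scaling_max_freq")    # 2300000
    min_freq=$(< "$cpu/cpufreq/scaling_min_freq")    # 2300000
    read -ra governors < "$cpu/cpufreq/scaling_available_governors"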
00:21:01.605   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.605   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=40
00:21:01.605   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq ]]
00:21:01.605   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.605   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.605   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu40/cpufreq/base_frequency ]]
00:21:01.605   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.605   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.605   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=1000000
00:21:01.605   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.605   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_40
00:21:01.605   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_40[@]'
00:21:01.605   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.605   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_40
00:21:01.605   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_40[@]'
00:21:01.605   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.606   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.606    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 40 0xce
00:21:01.606   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.606   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.606   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.606   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.606   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.606   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.606   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.606   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.606   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.606   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.606   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.606   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=41
00:21:01.606   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu41/cpufreq ]]
00:21:01.606   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.606   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.606   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu41/cpufreq/base_frequency ]]
00:21:01.606   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.606   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.606   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.606   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.606   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_41
00:21:01.606   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_41[@]'
00:21:01.606   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.606   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_41
00:21:01.606   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_41[@]'
00:21:01.606   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.606   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.606    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 41 0xce
00:21:01.606   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.606   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.606   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.606   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.606   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.606   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.606   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.606   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.606   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.606   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.606   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.606   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.606   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.606   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
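In the intel_pstate branch, line 295 shells out to test/scheduler/rdmsr.pl to read MSR 0xCE (MSR_PLATFORM_INFO); bits 15:8 of the returned value hold the maximum non-turbo ratio, which times 100000 kHz gives the base frequency. Worked on the exact value captured in this trace:

    msr=0x70a2cf3811700                  # non_turbo_ratio value from the trace
    ratio=$(( (msr >> 8) & 0xff ))       # bits 15:8 -> 23
    base_khz=$(( ratio * 100000 ))       # 2300000, matching base_frequency above
    echo "$ratio $base_khz"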
00:21:01.607   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.607   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=42
00:21:01.607   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq ]]
00:21:01.607   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.607   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.607   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu42/cpufreq/base_frequency ]]
00:21:01.607   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.607   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.607   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.607   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.607   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_42
00:21:01.607   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_42[@]'
00:21:01.607   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.607   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_42
00:21:01.607   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_42[@]'
00:21:01.607   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.607   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.607    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 42 0xce
00:21:01.607   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.607   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.607   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.607   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.607   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.607   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.607   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.607   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.607   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.607   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.607   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.607   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.607   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.607   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
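Each block then compares the MSR-derived base_max_freq (2300000 kHz) against cpuinfo_max_freq (3700000 kHz). Since base is lower, turbo is treated as available: num_freqs grows from 14 to 15 and slot 0 of the frequency table is set to base + 1 kHz (2300001), the same sentinel visible in cpufreq_max_freqs above. A condensed equivalent of lines 307-316:

    base_max_freq=2300000 cpuinfo_max_freq=3700000 num_freqs=14 is_turbo=0
    if (( base_max_freq < cpuinfo_max_freq )); then
        (( num_freqs += 1 ))        # one extra slot for the turbo entry
        is_turbo=1
    fi
    (( is_turbo )) && turbo_freq=$(( base_max_freq + 1 ))   # 2300001 kHz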
00:21:01.607   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.607   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=43
00:21:01.607   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq ]]
00:21:01.607   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.607   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.607   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu43/cpufreq/base_frequency ]]
00:21:01.607   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.607   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.607   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.607   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.607   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_43
00:21:01.607   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_43[@]'
00:21:01.607   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.607   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_43
00:21:01.608   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_43[@]'
00:21:01.608   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.608   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.608    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 43 0xce
00:21:01.608   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.608   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.608   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.608   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.608   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.608   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.608   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.608   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.608   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.608   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.608   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
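The freq loop traced above fills the per-CPU table: slot 0 gets the turbo sentinel, and the remaining slots step down from the base frequency in 100000 kHz decrements to the 1000000 kHz minimum. A compact sketch that reproduces the same list, using the values from this trace:

    available_freqs=()
    num_freqs=15 base=2300000 is_turbo=1
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$(( base + 1 ))                    # turbo sentinel
        else
            available_freqs[freq]=$(( base - (freq - is_turbo) * 100000 ))
        fi
    done
    echo "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000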
00:21:01.608   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.608   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=44
00:21:01.608   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq ]]
00:21:01.608   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.608   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.608   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu44/cpufreq/base_frequency ]]
00:21:01.608   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.608   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.608   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.608   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.608   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_44
00:21:01.608   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_44[@]'
00:21:01.608   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.608   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_44
00:21:01.608   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_44[@]'
00:21:01.608   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.608   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.608    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 44 0xce
00:21:01.608   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.608   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.608   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.608   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.608   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.608   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.608   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.608   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.608   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.608   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.608   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.608   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.608   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.608   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
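The local -n lines (275/279) give each CPU its own dynamically named arrays (available_governors_cpu_N, available_freqs_cpu_N), while the shared cpufreq_available_* arrays store the name plus [@] for later indirect expansion. A minimal sketch of that nameref pattern outside of SPDK, with illustrative values:

    declare -a cpufreq_available_freqs
    cpu_idx=44
    name=available_freqs_cpu_$cpu_idx
    declare -n available_freqs=$name                    # nameref to per-CPU array
    cpufreq_available_freqs[cpu_idx]="${name}[@]"       # remember the name
    available_freqs=(2300001 2300000 2200000)           # writes through the nameref
    echo "${!cpufreq_available_freqs[cpu_idx]}"         # indirect expansion -> list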
00:21:01.873   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.873   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=45
00:21:01.873   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq ]]
00:21:01.873   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.873   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.873   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu45/cpufreq/base_frequency ]]
00:21:01.873   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.873   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.873   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.873   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.873   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_45
00:21:01.873   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_45[@]'
00:21:01.873   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.873   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_45
00:21:01.873   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_45[@]'
00:21:01.873   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.873   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.873    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 45 0xce
00:21:01.873   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.873   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.873   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.873   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.873   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.873   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.873   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.873   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.873   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.873   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.873   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.873   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.873   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.873   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
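Line 277 in each block reads scaling_available_governors straight into an array via word splitting; under intel_pstate in active mode that file typically holds "performance powersave", consistent with the powersave governor recorded above. A minimal sketch, assuming the same sysfs path:

    cpu=/sys/devices/system/cpu/cpu46
    available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
    printf '%s\n' "${available_governors[@]}"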
00:21:01.874   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.874   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=46
00:21:01.874   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq ]]
00:21:01.874   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.874   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.874   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu46/cpufreq/base_frequency ]]
00:21:01.874   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.874   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.874   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.874   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.874   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_46
00:21:01.874   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_46[@]'
00:21:01.874   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.874   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_46
00:21:01.874   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_46[@]'
00:21:01.874   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.874   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.874    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 46 0xce
00:21:01.874   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.874   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.874   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.874   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.874   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.874   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.874   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.874   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.874   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.874   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.874   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.874   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.874   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.874   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
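The trace never shows how num_freqs=14 is derived, but it is consistent with the span from base to minimum frequency in 100000 kHz steps, inclusive. Note also that line 293 declares num_freq (singular) in the local list while num_freqs is the variable actually assigned at line 306, so the count leaks out of the function scope; harmless here, but visible in the trace. The arithmetic, using this node's values:

    base=2300000 min=1000000 step=100000
    num_freqs=$(( (base - min) / step + 1 ))   # 1300000 / 100000 + 1 = 14
    echo "$num_freqs"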
00:21:01.875   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.875   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=47
00:21:01.875   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu47/cpufreq ]]
00:21:01.875   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.875   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.875   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu47/cpufreq/base_frequency ]]
00:21:01.875   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.875   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.875   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_47
00:21:01.875   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_47[@]'
00:21:01.875   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.875   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_47
00:21:01.875   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_47[@]'
00:21:01.875   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.875   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.875    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 47 0xce
00:21:01.875   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.875   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.875   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.875   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.875   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.875   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.875   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.875   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.875   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.875   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
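Finally, a quick check, assuming the same sysfs layout as this node, that the 2300001 kHz stored in cpufreq_max_freqs is the script's turbo sentinel (base + 1 kHz) rather than a real P-state:

    cpu=/sys/devices/system/cpu/cpu47/cpufreq
    max=$(< "$cpu/scaling_max_freq")     # 2300001 in this trace
    base=$(< "$cpu/base_frequency")      # 2300000
    (( max == base + 1 )) && echo "turbo enabled (sentinel = base + 1 kHz)"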
00:21:01.875   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.875   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=48
00:21:01.875   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq ]]
00:21:01.875   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.875   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.875   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu48/cpufreq/base_frequency ]]
00:21:01.875   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.875   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.875   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_48
00:21:01.875   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_48[@]'
00:21:01.875   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.875   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_48
00:21:01.875   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_48[@]'
00:21:01.875   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.875   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.875    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 48 0xce
00:21:01.875   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.875   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.875   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.875   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
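The rdmsr.pl call above reads MSR 0xCE, which on Intel parts is MSR_PLATFORM_INFO; bits 15:8 carry the maximum non-turbo ratio. Decoding the logged value by hand (a sketch, not part of the script):

    msr=0x70a2cf3811700                    # raw rdmsr.pl output logged above
    ratio=$(( (msr >> 8) & 0xff ))         # bits 15:8 -> 0x17 = 23
    echo "non-turbo ratio: $ratio"
    echo "base frequency : $(( ratio * 100000 )) kHz"  # 23 x 100 MHz = 2300000 kHz

which reproduces exactly the cpufreq_non_turbo_ratio=23 and the 2300000 kHz base frequency recorded in the trace.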
00:21:01.875   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.875   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.875   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.875   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.875   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.875   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.875   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.875   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.875   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.875   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
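Each iteration of the for-cpu loop (common.sh@260-280) starts with the same sysfs discovery before branching on the driver. A sketch of that step, assuming the standard cpufreq sysfs layout; the real script runs inside a function and uses local -n, so declare -n/unset -n stand in here to make the snippet runnable at top level:

    shopt -s extglob
    sysfs_cpu=/sys/devices/system/cpu
    for cpu in "$sysfs_cpu"/cpu+([0-9]); do
        cpu_idx=${cpu##*cpu}
        [[ -e $cpu/cpufreq ]] || continue
        cpufreq_drivers[cpu_idx]=$(< "$cpu/cpufreq/scaling_driver")
        cpufreq_governors[cpu_idx]=$(< "$cpu/cpufreq/scaling_governor")
        [[ -e $cpu/cpufreq/base_frequency ]] &&
            cpufreq_base_freqs[cpu_idx]=$(< "$cpu/cpufreq/base_frequency")
        cpufreq_cur_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_cur_freq")
        cpufreq_max_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_max_freq")
        cpufreq_min_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_min_freq")
        # one dynamically named array per CPU, reached through a nameref
        unset -n available_governors
        declare -n available_governors=available_governors_cpu_$cpu_idx
        cpufreq_available_governors[cpu_idx]="available_governors_cpu_${cpu_idx}[@]"
        available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
    done

The nameref lets the script keep a separate governor/frequency array per CPU while still addressing them uniformly through the cpufreq_available_* index arrays.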
00:21:01.876   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.876   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=49
00:21:01.876   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq ]]
00:21:01.876   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.876   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.876   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu49/cpufreq/base_frequency ]]
00:21:01.876   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.876   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.876   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.876   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.876   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_49
00:21:01.876   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_49[@]'
00:21:01.876   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.876   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_49
00:21:01.876   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_49[@]'
00:21:01.876   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.876   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.876    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 49 0xce
00:21:01.876   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.876   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.876   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.876   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.876   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.876   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.876   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.876   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.876   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.876   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.876   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.876   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.876   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.876   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
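The 'available_freqs_cpu_49[@]' string stored at common.sh@280 is a handle for bash indirect expansion, which is presumably how the scheduler tests read these per-CPU arrays back later. A minimal usage sketch — get_freqs is a hypothetical helper for illustration, not a function from the script:

    # Hypothetical reader for the handles stored in cpufreq_available_freqs.
    get_freqs() {
        local handle=${cpufreq_available_freqs[$1]}  # e.g. 'available_freqs_cpu_49[@]'
        printf '%s\n' "${!handle}"                   # indirect expansion of the whole array
    }
    get_freqs 49   # -> 2300001 2300000 ... 1000000, as built above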
00:21:01.876   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.876   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=5
00:21:01.876   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq ]]
00:21:01.876   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.876   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.876   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu5/cpufreq/base_frequency ]]
00:21:01.876   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.877   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000290
00:21:01.877   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.877   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.877   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_5
00:21:01.877   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_5[@]'
00:21:01.877   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.877   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_5
00:21:01.877   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_5[@]'
00:21:01.877   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.877   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.877    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 5 0xce
00:21:01.877   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.877   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.877   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.877   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.877   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.877   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.877   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.877   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.877   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.877   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.877   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.877   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=50
00:21:01.877   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq ]]
00:21:01.877   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.877   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.877   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu50/cpufreq/base_frequency ]]
00:21:01.877   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.877   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000063
00:21:01.877   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.877   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.877   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_50
00:21:01.877   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_50[@]'
00:21:01.877   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.877   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_50
00:21:01.877   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_50[@]'
00:21:01.877   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.877   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.877    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 50 0xce
00:21:01.877   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.877   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.877   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.877   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.877   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.877   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.877   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.877   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.877   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.877   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.877   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.877   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.877   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.877   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.878   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=51
00:21:01.878   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq ]]
00:21:01.878   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.878   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.878   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu51/cpufreq/base_frequency ]]
00:21:01.878   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.878   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.878   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.878   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.878   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_51
00:21:01.878   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_51[@]'
00:21:01.878   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.878   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_51
00:21:01.878   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_51[@]'
00:21:01.878   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.878   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.878    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 51 0xce
00:21:01.878   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.878   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.878   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.878   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.878   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.878   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.878   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.878   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.878   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.878   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.878   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.878   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.878   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.878   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.879   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=52
00:21:01.879   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq ]]
00:21:01.879   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.879   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.879   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu52/cpufreq/base_frequency ]]
00:21:01.879   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.879   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.879   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_52
00:21:01.879   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_52[@]'
00:21:01.879   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.879   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_52
00:21:01.879   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_52[@]'
00:21:01.879   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.879   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.879    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 52 0xce
00:21:01.879   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.879   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.879   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.879   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.879   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.879   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.879   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.879   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.879   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.879   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.879   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=53
00:21:01.879   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq ]]
00:21:01.879   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.879   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.879   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu53/cpufreq/base_frequency ]]
00:21:01.879   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.879   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.879   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_53
00:21:01.879   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_53[@]'
00:21:01.879   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.879   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_53
00:21:01.879   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_53[@]'
00:21:01.879   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.879   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.879    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 53 0xce
00:21:01.879   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.879   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.879   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.879   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.879   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.879   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.879   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.879   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.879   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.879   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.879   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.879   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.879   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.879   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:01.880   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=54
00:21:01.880   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq ]]
00:21:01.880   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:01.880   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:01.880   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu54/cpufreq/base_frequency ]]
00:21:01.880   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:01.880   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:01.880   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:01.880   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:01.880   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_54
00:21:01.880   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_54[@]'
00:21:01.880   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:01.880   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_54
00:21:01.880   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_54[@]'
00:21:01.880   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:01.880   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:01.880    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 54 0xce
00:21:01.880   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:01.880   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:01.880   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:01.880   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:01.880   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:01.880   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:01.880   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:01.880   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:01.880   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:01.880   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:01.880   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:01.880   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:01.880   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:01.880   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.144   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
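The rdmsr.pl call traced above reads MSR 0xCE (MSR_PLATFORM_INFO); on Intel CPUs its bits 15:8 carry the maximum non-turbo ratio, which multiplied by the 100 MHz bus clock gives the base frequency in kHz. Decoding the value from this log by hand (a worked example, not script output) reproduces the cpufreq_non_turbo_ratio=23 and base_max_freq=2300000 assignments that follow it:

    msr=0x70a2cf3811700             # raw value returned by rdmsr.pl 54 0xce above
    ratio=$(( (msr >> 8) & 0xff ))  # bits 15:8 -> 0x17 == 23
    echo "$ratio"                   # 23       (cpufreq_non_turbo_ratio)
    echo "$(( ratio * 100000 ))"    # 2300000  kHz == 2.3 GHz base_max_freq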
00:21:02.145   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.145   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=55
00:21:02.145   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq ]]
00:21:02.145   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.145   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.145   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu55/cpufreq/base_frequency ]]
00:21:02.145   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.145   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.145   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.145   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.145   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_55
00:21:02.145   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_55[@]'
00:21:02.145   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.145   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_55
00:21:02.145   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_55[@]'
00:21:02.145   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.145   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.145    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 55 0xce
00:21:02.145   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.145   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.145   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.145   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.145   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.145   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.145   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.145   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.145   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.145   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.145   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.145   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.145   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.145   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
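Each block also shows the per-CPU array bookkeeping at common.sh@275-280: a bash nameref binds available_freqs to a dynamically named array (available_freqs_cpu_55 in the block above), while the parent array stores the 'name[@]' string so callers can expand it indirectly later. A hedged, self-contained sketch of that pattern — the frequency values here are placeholders, not a claim about the script's data:

    cpu_idx=55
    declare -n available_freqs="available_freqs_cpu_${cpu_idx}"      # nameref binding
    cpufreq_available_freqs[cpu_idx]="available_freqs_cpu_${cpu_idx}[@]"
    available_freqs=(2300001 2300000 2200000)     # writes land in available_freqs_cpu_55
    echo "${!cpufreq_available_freqs[cpu_idx]}"   # indirect expansion reads them back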
00:21:02.146   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.146   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=56
00:21:02.146   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq ]]
00:21:02.146   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.146   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.146   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu56/cpufreq/base_frequency ]]
00:21:02.146   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.146   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.146   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_56
00:21:02.146   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_56[@]'
00:21:02.146   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.146   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_56
00:21:02.146   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_56[@]'
00:21:02.146   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.146   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.146    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 56 0xce
00:21:02.146   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.146   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.146   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.146   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.146   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.146   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.146   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.146   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.146   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.146   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.146   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=57
00:21:02.146   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq ]]
00:21:02.146   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.146   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.146   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu57/cpufreq/base_frequency ]]
00:21:02.146   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.146   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.146   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_57
00:21:02.146   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_57[@]'
00:21:02.146   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.146   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_57
00:21:02.146   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_57[@]'
00:21:02.146   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.146   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.146    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 57 0xce
00:21:02.146   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.146   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.146   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.146   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.146   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.146   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.146   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.146   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.146   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.146   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.146   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.146   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.146   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.146   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.147   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=58
00:21:02.147   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq ]]
00:21:02.147   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.147   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.147   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu58/cpufreq/base_frequency ]]
00:21:02.147   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.147   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.147   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.147   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.147   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_58
00:21:02.147   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_58[@]'
00:21:02.147   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.147   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_58
00:21:02.147   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_58[@]'
00:21:02.147   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.147   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.147    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 58 0xce
00:21:02.147   10:58:50	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.147   10:58:50	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.147   10:58:50	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.147   10:58:50	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.147   10:58:50	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.147   10:58:50	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.147   10:58:50	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.147   10:58:50	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.147   10:58:50	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.147   10:58:50	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.147   10:58:50	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.147   10:58:50	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.147   10:58:50	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.147   10:58:50	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.147   10:58:50	-- scheduler/common.sh@261 -- # cpu_idx=59
00:21:02.147   10:58:50	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq ]]
00:21:02.147   10:58:50	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.147   10:58:50	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.147   10:58:50	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu59/cpufreq/base_frequency ]]
00:21:02.147   10:58:50	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.148   10:58:50	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.148   10:58:50	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.148   10:58:50	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.148   10:58:50	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_59
00:21:02.148   10:58:50	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_59[@]'
00:21:02.148   10:58:50	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.148   10:58:50	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_59
00:21:02.148   10:58:50	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_59[@]'
00:21:02.148   10:58:50	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.148   10:58:50	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.148    10:58:50	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 59 0xce
00:21:02.148   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.148   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.148   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.148   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.148   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.148   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.148   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.148   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.148   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.148   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.148   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.148   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=6
00:21:02.148   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq ]]
00:21:02.148   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.148   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.148   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu6/cpufreq/base_frequency ]]
00:21:02.148   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.148   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000030
00:21:02.148   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.148   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.148   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_6
00:21:02.148   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_6[@]'
00:21:02.148   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.148   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_6
00:21:02.148   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_6[@]'
00:21:02.148   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.148   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.148    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 6 0xce
00:21:02.148   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.148   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.148   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.148   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.148   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.148   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.148   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.148   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.148   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.148   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.148   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.148   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.148   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.148   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
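[Annotation] The lines above close out one pass of the per-CPU discovery loop in scheduler/common.sh; the cpu_idx=60 block that follows repeats the same sequence. As a rough reconstruction of the sysfs step (common.sh@260-280) based purely on this trace: the scaling_* filenames below are the standard cpufreq sysfs nodes and are an assumption here, since the trace only shows the resulting assignments, not the reads.

    # Sketch, not the verbatim script: per-CPU cpufreq discovery as traced.
    shopt -s extglob                      # needed for the +([0-9]) glob
    discover_cpufreq_sketch() {
        local cpu cpu_idx sysfs_cpu=/sys/devices/system/cpu
        for cpu in "$sysfs_cpu/cpu"+([0-9]); do
            cpu_idx=${cpu##*cpu}
            [[ -e $cpu/cpufreq ]] || continue
            cpufreq_drivers[cpu_idx]=$(< "$cpu/cpufreq/scaling_driver")     # intel_pstate here
            cpufreq_governors[cpu_idx]=$(< "$cpu/cpufreq/scaling_governor") # powersave here
            [[ -e $cpu/cpufreq/base_frequency ]] &&
                cpufreq_base_freqs[cpu_idx]=$(< "$cpu/cpufreq/base_frequency")
            cpufreq_cur_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_cur_freq")
            cpufreq_max_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_max_freq")
            cpufreq_min_freqs[cpu_idx]=$(< "$cpu/cpufreq/scaling_min_freq")
            # Per-CPU results live in dynamically named arrays via namerefs,
            # exactly as the @275-@277 trace lines show:
            local -n available_governors=available_governors_cpu_$cpu_idx
            cpufreq_available_governors[cpu_idx]="available_governors_cpu_${cpu_idx}[@]"
            available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
        done
    }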
00:21:02.149   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.149   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=60
00:21:02.149   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq ]]
00:21:02.149   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.149   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.149   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu60/cpufreq/base_frequency ]]
00:21:02.149   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.149   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.149   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.149   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.149   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_60
00:21:02.149   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_60[@]'
00:21:02.149   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.149   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_60
00:21:02.149   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_60[@]'
00:21:02.149   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.149   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.149    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 60 0xce
00:21:02.149   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.149   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.149   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.149   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.149   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.149   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.149   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.149   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.149   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.149   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.149   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.149   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.149   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.149   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
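[Annotation] intel_pstate does not expose scaling_available_frequencies, which is why every pass drops into the MSR branch: rdmsr.pl reads MSR 0xCE (MSR_PLATFORM_INFO), whose bits 15:8 carry the maximum non-turbo ratio. Decoding the value traced above is a one-liner (the hex literal is copied straight from the log):

    # Bits 15:8 of MSR_PLATFORM_INFO = maximum non-turbo ratio
    printf '%d\n' $(( (0x70a2cf3811700 >> 8) & 0xff ))   # prints 23

A ratio of 23 times the 100 MHz bus clock gives the 2300000 kHz base frequency recorded in cpufreq_base_freqs, and because cpuinfo_max_freq (3700000) exceeds that base, common.sh@307-309 adds one extra frequency slot and marks the core turbo-capable.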
00:21:02.149   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.149   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=61
00:21:02.150   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq ]]
00:21:02.150   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.150   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.150   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu61/cpufreq/base_frequency ]]
00:21:02.150   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.150   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.150   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.150   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.150   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_61
00:21:02.150   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_61[@]'
00:21:02.150   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.150   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_61
00:21:02.150   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_61[@]'
00:21:02.150   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.150   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.150    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 61 0xce
00:21:02.150   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.150   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.150   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.150   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.150   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.150   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.150   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.150   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.150   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.150   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.150   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
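[Annotation] The long freq++ runs that dominate this section all build the same table: slot 0 gets a turbo sentinel of base_max_freq + 1 kHz (the 2300001 written at common.sh@316), and the remaining slots descend from base_max_freq in 100000 kHz steps (common.sh@318) until num_freqs entries exist, i.e. 14 real frequencies from 2300000 down to 1000000 plus the sentinel, 15 in all. Condensed, the loop traced at common.sh@313-318 behaves like the following sketch (the step formula is inferred from the traced values, not copied from the script):

    # Condensed sketch of the traced loop; num_freqs already includes
    # the extra turbo slot added at common.sh@308.
    available_freqs=()
    for ((freq = 0; freq < num_freqs; freq++)); do
        if ((freq == 0 && cpufreq_is_turbo[cpu_idx] == 1)); then
            available_freqs[freq]=$((base_max_freq + 1))   # turbo marker: 2300001
        else
            # 2300000, 2200000, ..., 1000000 in 100 MHz steps
            available_freqs[freq]=$((base_max_freq - (freq - cpufreq_is_turbo[cpu_idx]) * 100000))
        fi
    done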
00:21:02.150   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.150   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=62
00:21:02.150   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq ]]
00:21:02.150   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.150   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.150   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu62/cpufreq/base_frequency ]]
00:21:02.150   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.150   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.150   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.150   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.150   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_62
00:21:02.150   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_62[@]'
00:21:02.150   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.150   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_62
00:21:02.150   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_62[@]'
00:21:02.150   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.150   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.150    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 62 0xce
00:21:02.150   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.150   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.150   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.150   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.150   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.150   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.150   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.150   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.150   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.150   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.150   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.150   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.150   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.150   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
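[Annotation] After each pass the per-CPU array is complete. A quick post-hoc check (hypothetical snippet; the array name assumes the script's variables are still in scope, e.g. when sourcing common.sh in a bash 4.3+ shell):

    freqs=("${available_freqs_cpu_62[@]}")
    echo "cpu62: ${#freqs[@]} entries, ${freqs[0]} (turbo) ... ${freqs[-1]}"
    # expected: cpu62: 15 entries, 2300001 (turbo) ... 1000000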
00:21:02.151   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.151   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=63
00:21:02.151   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu63/cpufreq ]]
00:21:02.151   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.151   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.151   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu63/cpufreq/base_frequency ]]
00:21:02.151   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.151   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.151   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.151   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.151   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_63
00:21:02.151   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_63[@]'
00:21:02.151   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.151   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_63
00:21:02.151   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_63[@]'
00:21:02.151   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.151   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.151    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 63 0xce
00:21:02.151   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.151   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.151   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.151   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.151   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.151   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.151   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.151   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.151   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.151   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.151   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.151   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.151   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.151   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.151   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=64
00:21:02.151   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq ]]
00:21:02.151   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.151   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.151   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu64/cpufreq/base_frequency ]]
00:21:02.151   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.151   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=2300000
00:21:02.151   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.151   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.151   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_64
00:21:02.151   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_64[@]'
00:21:02.151   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.151   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_64
00:21:02.151   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_64[@]'
00:21:02.151   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.151   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.152    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 64 0xce
00:21:02.152   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.152   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.152   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.152   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.152   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.152   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.152   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.152   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.152   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.152   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.152   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.152   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.152   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.152   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
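[Annotation] One detail that stands out in this span: cpu64 is the only core whose current frequency is read back at the full 2300000 kHz base (common.sh@271 in the block above) rather than the 1000000 kHz floor every other core reports; plausibly it is the core busy running this discovery loop itself.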
00:21:02.416   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.416   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=65
00:21:02.416   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq ]]
00:21:02.416   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.416   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.416   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu65/cpufreq/base_frequency ]]
00:21:02.416   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.416   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.416   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_65
00:21:02.416   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_65[@]'
00:21:02.416   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.416   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_65
00:21:02.416   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_65[@]'
00:21:02.416   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.416   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.416    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 65 0xce
00:21:02.416   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.416   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.416   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.416   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.416   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.416   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.416   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.416   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.416   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.416   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.416   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=66
00:21:02.416   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq ]]
00:21:02.416   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.416   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.416   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu66/cpufreq/base_frequency ]]
00:21:02.416   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.416   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.416   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_66
00:21:02.416   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_66[@]'
00:21:02.416   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.416   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_66
00:21:02.416   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_66[@]'
00:21:02.416   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.416   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.416    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 66 0xce
00:21:02.416   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.416   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.416   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.416   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.416   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.416   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.416   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.416   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.416   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.416   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.416   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.416   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.416   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.416   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
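Note: the loop traced above (common.sh lines 313-318) builds each CPU's frequency table. When turbo is available, slot 0 gets a sentinel of base + 1 kHz, and every later slot steps down 100 MHz from the base frequency to the 1.0 GHz floor. A minimal standalone sketch of that fill, with the arithmetic on the non-turbo branch inferred from the values logged above:

    #!/usr/bin/env bash
    # Reproduces the table the trace builds: 2.3 GHz base, 14 steps, plus turbo.
    base_max_freq=2300000   # kHz, derived from the non-turbo ratio
    num_freqs=14            # one entry per 100 MHz step down to 1000000
    is_turbo=1              # turbo detected, so one extra sentinel slot
    (( num_freqs += is_turbo ))

    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && is_turbo == 1 )); then
            available_freqs[freq]=$(( base_max_freq + 1 ))   # 2300001: "turbo allowed"
        else
            available_freqs[freq]=$(( base_max_freq - (freq - is_turbo) * 100000 ))
        fi
    done
    printf '%s\n' "${available_freqs[@]}"   # 2300001 2300000 2200000 ... 1000000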
00:21:02.417   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.417   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=67
00:21:02.417   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq ]]
00:21:02.417   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.417   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.417   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu67/cpufreq/base_frequency ]]
00:21:02.417   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.417   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.417   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.417   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.417   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_67
00:21:02.417   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_67[@]'
00:21:02.417   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.417   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_67
00:21:02.417   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_67[@]'
00:21:02.417   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.417   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.417    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 67 0xce
00:21:02.417   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.417   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.417   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.417   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.417   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.417   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.417   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.417   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.417   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.417   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.417   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.417   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.417   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.417   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
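Note: each block shells out to rdmsr.pl against MSR 0xCE (MSR_PLATFORM_INFO) and takes the maximum non-turbo ratio from bits 15:8 of the result. A quick sketch of that decode, reproducing the 23 logged above from the raw value 0x70a2cf3811700 (the 100 MHz multiplier is an assumption that matches the resulting base_max_freq of 2300000 kHz):

    #!/usr/bin/env bash
    # MSR 0xCE (MSR_PLATFORM_INFO): bits 15:8 hold the maximum non-turbo ratio.
    raw=0x70a2cf3811700                     # value rdmsr.pl returned in the log
    (( ratio = (raw >> 8) & 0xff ))
    echo "non-turbo ratio: $ratio"          # -> 23
    echo "base max freq:   $(( ratio * 100000 )) kHz"   # -> 2300000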
00:21:02.417   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.418   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=68
00:21:02.418   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq ]]
00:21:02.418   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.418   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.418   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu68/cpufreq/base_frequency ]]
00:21:02.418   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.418   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.418   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.418   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.418   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_68
00:21:02.418   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_68[@]'
00:21:02.418   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.418   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_68
00:21:02.418   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_68[@]'
00:21:02.418   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.418   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.418    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 68 0xce
00:21:02.418   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.418   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.418   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.418   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.418   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.418   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.418   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.418   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.418   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.418   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.418   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
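Note: rdmsr.pl itself is SPDK's Perl helper and its internals are not shown in this log. A roughly equivalent raw read can go through the msr kernel module's character device, where the file offset selects the register. A hedged sketch only (assumes 'modprobe msr' has been run and the caller is root; not the helper's actual implementation):

    #!/usr/bin/env bash
    # Read one 64-bit MSR via /dev/cpu/N/msr: pread at offset == register number.
    cpu=68 reg=$(( 0xce ))
    val=$(dd if=/dev/cpu/$cpu/msr bs=8 count=1 skip=$reg iflag=skip_bytes 2>/dev/null |
          od -An -tx8 | tr -d ' ')
    echo "0x$val"   # on little-endian x86 this prints the MSR value directly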
00:21:02.418   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.418   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=69
00:21:02.418   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq ]]
00:21:02.418   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.418   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.418   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu69/cpufreq/base_frequency ]]
00:21:02.418   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.418   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.418   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.418   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.418   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_69
00:21:02.418   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_69[@]'
00:21:02.418   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.418   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_69
00:21:02.418   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_69[@]'
00:21:02.418   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.418   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.418    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 69 0xce
00:21:02.418   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.418   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.418   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.418   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.418   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.418   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.418   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.418   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.418   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.418   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.418   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.418   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.418   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.418   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
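Note: the "local -n" lines in every block are bash namerefs. The loop writes each CPU's table through the fixed name available_freqs while the real storage is a per-CPU array (available_freqs_cpu_69 and so on), and the string 'available_freqs_cpu_69[@]' is saved so later code can recover the whole array by indirect expansion. A self-contained sketch of that pattern (the helper name is illustrative):

    #!/usr/bin/env bash
    declare -a cpufreq_available_freqs      # per CPU: the *name* of the real array

    fill_freqs() {
        local cpu_idx=$1
        local -n available_freqs=available_freqs_cpu_$cpu_idx   # nameref
        cpufreq_available_freqs[cpu_idx]="available_freqs_cpu_${cpu_idx}[@]"
        available_freqs=(2300001 2300000 2200000)   # writes land in the per-CPU array
    }

    fill_freqs 69
    echo "${!cpufreq_available_freqs[69]}"   # indirection -> 2300001 2300000 2200000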
00:21:02.419   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.419   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=7
00:21:02.419   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq ]]
00:21:02.419   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.419   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.419   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu7/cpufreq/base_frequency ]]
00:21:02.419   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.419   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.419   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.419   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.419   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_7
00:21:02.419   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_7[@]'
00:21:02.419   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.419   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_7
00:21:02.419   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_7[@]'
00:21:02.419   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.419   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.419    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 7 0xce
00:21:02.419   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.419   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.419   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.419   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.419   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.419   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.419   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.419   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.419   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.419   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.419   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.419   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.419   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.419   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
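Note: the visiting order in this stretch (cpu69, then cpu7, then cpu70) is not a bug. "+([0-9])" in the for loop at @260 is an extglob pattern, and pathname expansion sorts matches lexicographically, so cpu7 lands between cpu69 and cpu70. A sketch of the same iteration (the nullglob line is an added safeguard, not in the trace):

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    sysfs_cpu=/sys/devices/system/cpu
    for cpu in "$sysfs_cpu/cpu"+([0-9]); do   # cpu0 .. cpu71, lexicographic order
        cpu_idx=${cpu##*cpu}                  # strip the path prefix, keep the index
        echo "$cpu_idx"                       # ... 68 69 7 70 71 8 9
    done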
00:21:02.419   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.419   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=70
00:21:02.419   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq ]]
00:21:02.419   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.419   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.419   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu70/cpufreq/base_frequency ]]
00:21:02.419   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.420   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000670
00:21:02.420   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.420   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.420   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_70
00:21:02.420   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_70[@]'
00:21:02.420   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.420   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_70
00:21:02.420   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_70[@]'
00:21:02.420   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.420   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.420    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 70 0xce
00:21:02.420   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.420   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.420   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.420   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.420   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.420   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.420   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.420   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.420   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.420   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.420   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
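Note: lines @262-@273 of each block come from plain sysfs reads under /sys/devices/system/cpu/cpuN/cpufreq. The trace only shows the resulting assignments, so the attribute names below are an assumption based on the standard cpufreq interface (base_frequency exists only under intel_pstate, hence the -e guard at @267):

    #!/usr/bin/env bash
    cpu=/sys/devices/system/cpu/cpu70
    if [[ -e $cpu/cpufreq ]]; then
        driver=$(< "$cpu/cpufreq/scaling_driver")       # intel_pstate in this log
        governor=$(< "$cpu/cpufreq/scaling_governor")   # powersave in this log
        [[ -e $cpu/cpufreq/base_frequency ]] && base=$(< "$cpu/cpufreq/base_frequency")
        cur=$(< "$cpu/cpufreq/scaling_cur_freq")
        min=$(< "$cpu/cpufreq/scaling_min_freq")
        max=$(< "$cpu/cpufreq/scaling_max_freq")        # 2300001 here: turbo sentinel
        echo "$driver/$governor base=${base:-n/a} cur=$cur min=$min max=$max"
    fi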
00:21:02.420   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.420   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=71
00:21:02.420   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu71/cpufreq ]]
00:21:02.420   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.420   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.420   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu71/cpufreq/base_frequency ]]
00:21:02.420   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.420   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.420   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.420   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.420   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_71
00:21:02.420   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_71[@]'
00:21:02.420   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.420   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_71
00:21:02.420   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_71[@]'
00:21:02.420   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.420   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.420    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 71 0xce
00:21:02.420   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.420   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.420   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.420   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.420   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.420   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.420   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.420   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.420   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.420   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.420   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.420   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.420   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.420   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
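Note: two small comparisons close out each block. @299/@303 flag high-priority cores: a core whose sysfs base-frequency ratio exceeds the package non-turbo ratio would get cpufreq_high_prio=1, but here 2300000 / 100000 equals 23 exactly, so every core stays at 0. @307-@309 detect turbo: the hardware ceiling from cpuinfo (3700000 kHz) exceeds the 2300000 kHz base, so the table gains its sentinel slot. Both checks in one sketch, using the values logged above:

    #!/usr/bin/env bash
    base_freq=2300000 non_turbo_ratio=23 cpuinfo_max_freq=3700000 num_freqs=14

    high_prio=0
    (( base_freq / 100000 > non_turbo_ratio )) && high_prio=1   # 23 > 23 is false

    is_turbo=0
    if (( base_freq < cpuinfo_max_freq )); then
        (( num_freqs += 1 ))   # room for the turbo sentinel
        is_turbo=1
    fi
    echo "high_prio=$high_prio turbo=$is_turbo slots=$num_freqs"   # -> 0 1 15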
00:21:02.421   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.421   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=8
00:21:02.421   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq ]]
00:21:02.421   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.421   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.421   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu8/cpufreq/base_frequency ]]
00:21:02.421   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.421   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.421   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.421   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.421   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_8
00:21:02.421   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_8[@]'
00:21:02.421   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.421   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_8
00:21:02.421   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_8[@]'
00:21:02.421   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.421   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.421    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 8 0xce
00:21:02.421   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.421   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.421   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.421   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.421   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.421   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.421   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.421   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.421   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.421   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
00:21:02.421   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.421   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.421   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.421   10:58:51	-- scheduler/common.sh@260 -- # for cpu in "$sysfs_cpu/cpu"+([0-9])
00:21:02.421   10:58:51	-- scheduler/common.sh@261 -- # cpu_idx=9
00:21:02.421   10:58:51	-- scheduler/common.sh@262 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq ]]
00:21:02.421   10:58:51	-- scheduler/common.sh@263 -- # cpufreq_drivers[cpu_idx]=intel_pstate
00:21:02.421   10:58:51	-- scheduler/common.sh@264 -- # cpufreq_governors[cpu_idx]=powersave
00:21:02.422   10:58:51	-- scheduler/common.sh@267 -- # [[ -e /sys/devices/system/cpu/cpu9/cpufreq/base_frequency ]]
00:21:02.422   10:58:51	-- scheduler/common.sh@268 -- # cpufreq_base_freqs[cpu_idx]=2300000
00:21:02.422   10:58:51	-- scheduler/common.sh@271 -- # cpufreq_cur_freqs[cpu_idx]=1000000
00:21:02.422   10:58:51	-- scheduler/common.sh@272 -- # cpufreq_max_freqs[cpu_idx]=2300001
00:21:02.422   10:58:51	-- scheduler/common.sh@273 -- # cpufreq_min_freqs[cpu_idx]=1000000
00:21:02.422   10:58:51	-- scheduler/common.sh@275 -- # local -n available_governors=available_governors_cpu_9
00:21:02.422   10:58:51	-- scheduler/common.sh@276 -- # cpufreq_available_governors[cpu_idx]='available_governors_cpu_9[@]'
00:21:02.422   10:58:51	-- scheduler/common.sh@277 -- # available_governors=($(< "$cpu/cpufreq/scaling_available_governors"))
00:21:02.422   10:58:51	-- scheduler/common.sh@279 -- # local -n available_freqs=available_freqs_cpu_9
00:21:02.422   10:58:51	-- scheduler/common.sh@280 -- # cpufreq_available_freqs[cpu_idx]='available_freqs_cpu_9[@]'
00:21:02.422   10:58:51	-- scheduler/common.sh@282 -- # case "${cpufreq_drivers[cpu_idx]}" in
00:21:02.422   10:58:51	-- scheduler/common.sh@293 -- # local non_turbo_ratio base_max_freq num_freq freq is_turbo=0
00:21:02.422    10:58:51	-- scheduler/common.sh@295 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/rdmsr.pl 9 0xce
00:21:02.422   10:58:51	-- scheduler/common.sh@295 -- # non_turbo_ratio=0x70a2cf3811700
00:21:02.422   10:58:51	-- scheduler/common.sh@296 -- # cpuinfo_min_freqs[cpu_idx]=1000000
00:21:02.422   10:58:51	-- scheduler/common.sh@297 -- # cpuinfo_max_freqs[cpu_idx]=3700000
00:21:02.422   10:58:51	-- scheduler/common.sh@298 -- # cpufreq_non_turbo_ratio[cpu_idx]=23
00:21:02.422   10:58:51	-- scheduler/common.sh@299 -- # (( cpufreq_base_freqs[cpu_idx] / 100000 > cpufreq_non_turbo_ratio[cpu_idx] ))
00:21:02.422   10:58:51	-- scheduler/common.sh@303 -- # cpufreq_high_prio[cpu_idx]=0
00:21:02.422   10:58:51	-- scheduler/common.sh@304 -- # base_max_freq=2300000
00:21:02.422   10:58:51	-- scheduler/common.sh@306 -- # num_freqs=14
00:21:02.422   10:58:51	-- scheduler/common.sh@307 -- # (( base_max_freq < cpuinfo_max_freqs[cpu_idx] ))
00:21:02.422   10:58:51	-- scheduler/common.sh@308 -- # (( num_freqs += 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@309 -- # cpufreq_is_turbo[cpu_idx]=1
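(A quick check of the arithmetic the trace above implies: the maximum non-turbo ratio sits in bits 15:8 of MSR 0xce (MSR_PLATFORM_INFO), so the raw rdmsr.pl readout reduces to the traced value of 23, i.e. a 2300000 kHz base on a 100 MHz bus clock:

    $ printf '%d\n' $(( (0x70a2cf3811700 >> 8) & 0xff ))
    23    # 23 * 100000 kHz = 2300000 kHz = base_max_freq

and num_freqs=14 is (2300000 - 1000000) / 100000 + 1; because cpuinfo_max_freq (3700000) exceeds base_max_freq, turbo is flagged and num_freqs grows to 15.)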
00:21:02.422   10:58:51	-- scheduler/common.sh@313 -- # available_freqs=()
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq = 0 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@316 -- # available_freqs[freq]=2300001
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2300000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2200000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2100000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=2000000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1900000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1800000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1700000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1600000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1500000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1400000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1300000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1200000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1100000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@315 -- # (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 ))
00:21:02.422   10:58:51	-- scheduler/common.sh@318 -- # available_freqs[freq]=1000000
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq++ ))
00:21:02.422   10:58:51	-- scheduler/common.sh@314 -- # (( freq < num_freqs ))
00:21:02.422   10:58:51	-- scheduler/common.sh@359 -- # [[ -e /sys/devices/system/cpu/cpufreq/boost ]]
00:21:02.422   10:58:51	-- scheduler/common.sh@361 -- # [[ -e /sys/devices/system/cpu/intel_pstate/no_turbo ]]
00:21:02.422   10:58:51	-- scheduler/common.sh@362 -- # turbo_enabled=1
00:21:02.422   10:58:51	-- scheduler/governor.sh@159 -- # initial_main_core_governor=powersave
00:21:02.422   10:58:51	-- scheduler/governor.sh@161 -- # verify_dpdk_governor
00:21:02.422   10:58:51	-- scheduler/governor.sh@60 -- # xtrace_disable
00:21:02.422   10:58:51	-- common/autotest_common.sh@10 -- # set +x
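(The loop traced repeatedly above for each cpu is common.sh:313-318; reconstructed as a sketch below. Variable names come straight from the xtrace; the exact script text is assumed:

    available_freqs=()
    for (( freq = 0; freq < num_freqs; freq++ )); do
        if (( freq == 0 && cpufreq_is_turbo[cpu_idx] == 1 )); then
            # line 316: slot 0 holds the turbo marker, base_max_freq + 1 kHz
            available_freqs[freq]=$((base_max_freq + 1))
        else
            # line 318: remaining slots descend in 100000 kHz steps
            available_freqs[freq]=$((base_max_freq - (freq - cpufreq_is_turbo[cpu_idx]) * 100000))
        fi
    done

For cpu9 this yields exactly the traced sequence: 2300001, then 2300000 down to 1000000.)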
00:21:02.682  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:02.682  [2024-12-15 10:58:51.577574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:02.682  [2024-12-15 10:58:51.577650] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2217031 ]
00:21:02.682  EAL: No free 2048 kB hugepages reported on node 1
00:21:02.942  [2024-12-15 10:58:51.725762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8
00:21:02.942  [2024-12-15 10:58:51.902794] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:21:02.942  [2024-12-15 10:58:51.903211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:02.942  [2024-12-15 10:58:51.903292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:02.942  [2024-12-15 10:58:51.903372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:02.942  [2024-12-15 10:58:51.903418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37
00:21:02.942  [2024-12-15 10:58:51.903453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38
00:21:02.942  [2024-12-15 10:58:51.903491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39
00:21:02.942  [2024-12-15 10:58:51.903543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40
00:21:02.942  [2024-12-15 10:58:51.903557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:06.238  POWER: Env isn't set yet!
00:21:06.238  POWER: Attempting to initialise ACPI cpufreq power management...
00:21:06.238  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:21:06.238  POWER: Cannot set governor of lcore 1 to userspace
00:21:06.238  POWER: Attempting to initialise PSTAT power management...
00:21:06.238  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 1 power management
00:21:06.239  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 2 power management
00:21:06.239  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 3 power management
00:21:06.239  POWER: Power management governor of lcore 4 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 4 power management
00:21:06.239  POWER: Power management governor of lcore 37 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 37 power management
00:21:06.239  POWER: Power management governor of lcore 38 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 38 power management
00:21:06.239  POWER: Power management governor of lcore 39 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 39 power management
00:21:06.239  POWER: Power management governor of lcore 40 has been set to 'performance' successfully
00:21:06.239  POWER: Initialized successfully for lcore 40 power management
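(The failed ACPI-cpufreq attempt above is expected on this box: the trace earlier shows every core on the intel_pstate driver, which in active mode only exposes the performance and powersave governors, so DPDK's attempt to switch lcore 1 to userspace is rejected before the PSTAT backend succeeds. A hedged reproduction from a shell would look roughly like:

    $ cat /sys/devices/system/cpu/cpu1/cpufreq/scaling_driver
    intel_pstate
    $ cat /sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors
    performance powersave
    $ echo userspace | sudo tee /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
    tee: '/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor': Invalid argument
)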
00:21:06.239  [2024-12-15 10:58:55.019870] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:21:06.239  [2024-12-15 10:58:55.019908] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:21:06.239  [2024-12-15 10:58:55.019926] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:21:06.500  [2024-12-15 10:58:55.466017] 'OCF_Core' volume operations registered
00:21:06.500  [2024-12-15 10:58:55.470154] 'OCF_Cache' volume operations registered
00:21:06.500  [2024-12-15 10:58:55.474855] 'OCF Composite' volume operations registered
00:21:06.500  [2024-12-15 10:58:55.479042] 'SPDK_block_device' volume operations registered
00:21:07.441  Waiting for samples...
00:21:08.380  MAIN DPDK cpu1 current frequency at 2199997 KHz (1000000-2300001 KHz), set frequency 2100000 KHz < 2200000 KHz
00:21:09.047  MAIN DPDK cpu1 current frequency at 2100001 KHz (1000000-2300001 KHz), set frequency 2000000 KHz < 2100000 KHz
00:21:10.161  MAIN DPDK cpu1 current frequency at 2000001 KHz (1000000-2300001 KHz), set frequency 2000000 KHz < 2000000 KHz
00:21:11.100  MAIN DPDK cpu1 current frequency at 2000000 KHz (1000000-2300001 KHz), set frequency 1800000 KHz < 2000000 KHz
00:21:12.480  MAIN DPDK cpu1 current frequency at 1800000 KHz (1000000-2300001 KHz), set frequency 1800000 KHz < 1800000 KHz
00:21:13.049  MAIN DPDK cpu1 current frequency at 1800007 KHz (1000000-2300001 KHz), set frequency 1600000 KHz < 1800000 KHz
00:21:14.427  MAIN DPDK cpu1 current frequency at 1599998 KHz (1000000-2300001 KHz), set frequency 1600000 KHz < 1600000 KHz
00:21:14.996  MAIN DPDK cpu1 current frequency at 1600001 KHz (1000000-2300001 KHz), set frequency 1400000 KHz < 1600000 KHz
00:21:16.376  MAIN DPDK cpu1 current frequency at 1400002 KHz (1000000-2300001 KHz), set frequency 1400000 KHz < 1400000 KHz
00:21:16.945  MAIN DPDK cpu1 current frequency at 1400001 KHz (1000000-2300001 KHz), set frequency 1200000 KHz < 1400000 KHz
00:21:18.324  MAIN DPDK cpu1 current frequency at 1200001 KHz (1000000-2300001 KHz), set frequency 1200000 KHz < 1200000 KHz
00:21:19.262  MAIN DPDK cpu1 current frequency at 1199996 KHz (1000000-2300001 KHz), set frequency 1000000 KHz < 1200000 KHz
00:21:19.262  Main cpu1 frequency dropped by 83%
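(The "Waiting for samples..." block steps cpu1 down one frequency at a time, only issuing the next request once the observed frequency has settled at the previous target. A minimal sketch of that pattern, with assumed file targets and pacing; the real governor.sh logic differs in detail, as the repeated targets in the log show:

    cpufreq=/sys/devices/system/cpu/cpu1/cpufreq
    for target in 2200000 2000000 1800000 1600000 1400000 1200000 1000000; do
        echo "$target" | sudo tee "$cpufreq/scaling_max_freq" > /dev/null
        until (( $(< "$cpufreq/scaling_cur_freq") <= target )); do
            sleep 0.5    # "Waiting for samples..."
        done
    done
)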
00:21:19.262   10:59:07	-- scheduler/governor.sh@1 -- # killprocess 2217031
00:21:19.262   10:59:07	-- common/autotest_common.sh@936 -- # '[' -z 2217031 ']'
00:21:19.262   10:59:07	-- common/autotest_common.sh@940 -- # kill -0 2217031
00:21:19.262    10:59:07	-- common/autotest_common.sh@941 -- # uname
00:21:19.262   10:59:07	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:21:19.262    10:59:07	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2217031
00:21:19.262   10:59:07	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:21:19.262   10:59:07	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:21:19.262   10:59:07	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2217031'
00:21:19.262  killing process with pid 2217031
00:21:19.262   10:59:07	-- common/autotest_common.sh@955 -- # kill 2217031
00:21:19.262   10:59:07	-- common/autotest_common.sh@960 -- # wait 2217031
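(killprocess, as traced above via autotest_common.sh:936-960, boils down to a guarded kill-and-wait; a sketch reconstructed from the trace, with the sudo special-casing at line 946 elided since it is not taken here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                        # line 936: no pid given
        kill -0 "$pid" 2>/dev/null || return 1           # line 940: nothing to kill
        local name=$(ps --no-headers -o comm= "$pid")    # line 942: reactor_1 here
        echo "killing process with pid $pid"
        kill "$pid"                                      # line 955
        wait "$pid"                                      # line 960: reap before returning
    }
)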
00:21:19.262  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:21:19.262  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:21:19.262  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:21:19.262  POWER: Power management governor of lcore 4 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original
00:21:19.262  POWER: Power management governor of lcore 37 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original
00:21:19.262  POWER: Power management governor of lcore 38 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original
00:21:19.262  POWER: Power management governor of lcore 39 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original
00:21:19.262  POWER: Power management governor of lcore 40 has been set to 'powersave' successfully
00:21:19.262  POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original
00:21:19.831   10:59:08	-- scheduler/governor.sh@1 -- # restore_cpufreq
00:21:19.831   10:59:08	-- scheduler/governor.sh@15 -- # local cpu
00:21:19.831   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.831   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 1 1000000 2300001
00:21:19.831   10:59:08	-- scheduler/common.sh@367 -- # local cpu=1
00:21:19.831   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.831   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.831   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq
00:21:19.831   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.831   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.831   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.831   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.831   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.831   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.831   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.831   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.831   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 1 powersave
00:21:19.831   10:59:08	-- scheduler/common.sh@395 -- # local cpu=1
00:21:19.831   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.831   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu1/cpufreq
00:21:19.831   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
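(The restore pass that begins here repeats the same two helpers for every cpu; their traced shape (common.sh:367-399) reconstructs to roughly the following sketch. The sysfs target files are assumed from context, since xtrace only shows the bare echo values:

    set_cpufreq() {
        local cpu=$1 min_freq=$2 max_freq=$3
        local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
        # lines 384-388: write max first, then min, each behind a sanity check
        (( max_freq >= min_freq )) && echo "$max_freq" > "$cpufreq/scaling_max_freq"
        (( min_freq <= max_freq )) && echo "$min_freq" > "$cpufreq/scaling_min_freq"
    }

    set_cpufreq_governor() {
        local cpu=$1 governor=$2
        local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
        # line 399: skip the write when the governor already matches
        [[ $(< "$cpufreq/scaling_governor") != "$governor" ]] \
            && echo "$governor" > "$cpufreq/scaling_governor"
    }

Every remaining block below is this pair applied to the next cpu in turn.)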
00:21:19.831   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.831   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 0 1000000 2300001
00:21:19.831   10:59:08	-- scheduler/common.sh@367 -- # local cpu=0
00:21:19.831   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.831   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.831   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 0 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=0
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu0/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.832   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.832   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 2 1000000 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@367 -- # local cpu=2
00:21:19.832   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.832   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 2 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=2
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu2/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.832   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.832   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 3 1000000 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@367 -- # local cpu=3
00:21:19.832   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.832   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 3 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=3
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu3/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.832   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.832   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 4 1000000 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@367 -- # local cpu=4
00:21:19.832   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.832   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 4 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=4
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu4/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.832   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.832   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 5 1000000 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@367 -- # local cpu=5
00:21:19.832   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.832   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu5/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 5 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=5
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu5/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.832   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.832   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 6 1000000 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@367 -- # local cpu=6
00:21:19.832   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.832   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 6 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=6
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu6/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.832   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.832   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 7 1000000 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@367 -- # local cpu=7
00:21:19.832   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.832   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 7 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=7
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu7/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.832   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.832   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 8 1000000 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@367 -- # local cpu=8
00:21:19.832   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.832   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.832   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.832   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.832   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.832   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.832   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 8 powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@395 -- # local cpu=8
00:21:19.832   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.832   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu8/cpufreq
00:21:19.832   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.833   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.833   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 9 1000000 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@367 -- # local cpu=9
00:21:19.833   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.833   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.833   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.833   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.833   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 9 powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@395 -- # local cpu=9
00:21:19.833   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu9/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.833   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.833   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 10 1000000 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@367 -- # local cpu=10
00:21:19.833   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.833   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.833   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.833   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.833   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 10 powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@395 -- # local cpu=10
00:21:19.833   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu10/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.833   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.833   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 11 1000000 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@367 -- # local cpu=11
00:21:19.833   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.833   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu11/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.833   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.833   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.833   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 11 powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@395 -- # local cpu=11
00:21:19.833   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu11/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.833   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.833   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 12 1000000 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@367 -- # local cpu=12
00:21:19.833   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:19.833   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:19.833   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:19.833   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:19.833   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:19.833   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:19.833   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 12 powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@395 -- # local cpu=12
00:21:19.833   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:19.833   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu12/cpufreq
00:21:19.833   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:19.833   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:19.833   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 13 1000000 2300001
00:21:20.097   10:59:08	-- scheduler/common.sh@367 -- # local cpu=13
00:21:20.097   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.097   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.097   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq
00:21:20.097   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.097   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.097   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.097   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.097   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.097   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.097   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.097   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.097   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 13 powersave
00:21:20.097   10:59:08	-- scheduler/common.sh@395 -- # local cpu=13
00:21:20.097   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.097   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu13/cpufreq
00:21:20.097   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.097   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.097   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 14 1000000 2300001
00:21:20.097   10:59:08	-- scheduler/common.sh@367 -- # local cpu=14
00:21:20.097   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.097   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.097   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq
00:21:20.097   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.097   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.097   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.097   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.097   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.097   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.097   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 14 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=14
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu14/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 15 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=15
00:21:20.098   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.098   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.098   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 15 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=15
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu15/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 16 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=16
00:21:20.098   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.098   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.098   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 16 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=16
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu16/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 17 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=17
00:21:20.098   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.098   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.098   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 17 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=17
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu17/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 36 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=36
00:21:20.098   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.098   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.098   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 36 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=36
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu36/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=37
00:21:20.098   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.098   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.098   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=37
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=38
00:21:20.098   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.098   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.098   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=38
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=39
00:21:20.098   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.098   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.098   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.098   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.098   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.098   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 39 powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@395 -- # local cpu=39
00:21:20.098   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.098   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:21:20.098   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.098   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.098   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001
00:21:20.098   10:59:08	-- scheduler/common.sh@367 -- # local cpu=40
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=40
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.099   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 41 1000000 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@367 -- # local cpu=41
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 41 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=41
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu41/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.099   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 42 1000000 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@367 -- # local cpu=42
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 42 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=42
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu42/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.099   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 43 1000000 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@367 -- # local cpu=43
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 43 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=43
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu43/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.099   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 44 1000000 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@367 -- # local cpu=44
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 44 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=44
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu44/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.099   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 45 1000000 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@367 -- # local cpu=45
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 45 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=45
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu45/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.099   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 46 1000000 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@367 -- # local cpu=46
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 46 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=46
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu46/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.099   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 47 1000000 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@367 -- # local cpu=47
00:21:20.099   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.099   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.099   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.099   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.099   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.099   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.099   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 47 powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@395 -- # local cpu=47
00:21:20.099   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.099   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu47/cpufreq
00:21:20.099   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.099   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 48 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=48
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.100   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.100   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.100   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 48 powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@395 -- # local cpu=48
00:21:20.100   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu48/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.100   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 49 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=49
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.100   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.100   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.100   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 49 powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@395 -- # local cpu=49
00:21:20.100   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu49/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.100   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 50 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=50
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.100   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.100   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.100   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 50 powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@395 -- # local cpu=50
00:21:20.100   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu50/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.100   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 51 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=51
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.100   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.100   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.100   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 51 powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@395 -- # local cpu=51
00:21:20.100   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu51/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.100   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 52 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=52
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.100   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.100   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.100   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 52 powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@395 -- # local cpu=52
00:21:20.100   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu52/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.100   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 53 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=53
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.100   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.100   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.100   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 53 powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@395 -- # local cpu=53
00:21:20.100   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu53/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.100   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 18 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=18
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu18/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.100   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.100   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.100   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 18 powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@395 -- # local cpu=18
00:21:20.100   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.100   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu18/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.100   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.100   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 37 1000000 2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@367 -- # local cpu=37
00:21:20.100   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.100   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.100   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:21:20.100   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.100   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 37 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=37
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu37/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.101   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 38 1000000 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@367 -- # local cpu=38
00:21:20.101   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.101   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 38 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=38
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu38/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.101   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 39 1000000 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@367 -- # local cpu=39
00:21:20.101   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.101   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 39 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=39
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu39/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.101   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 40 1000000 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@367 -- # local cpu=40
00:21:20.101   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.101   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 40 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=40
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu40/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.101   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 23 1000000 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@367 -- # local cpu=23
00:21:20.101   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.101   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 23 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=23
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu23/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.101   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 24 1000000 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@367 -- # local cpu=24
00:21:20.101   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.101   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 24 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=24
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu24/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.101   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 25 1000000 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@367 -- # local cpu=25
00:21:20.101   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.101   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 25 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=25
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu25/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.101   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 26 1000000 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@367 -- # local cpu=26
00:21:20.101   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.101   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.101   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.101   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.101   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.101   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.101   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 26 powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@395 -- # local cpu=26
00:21:20.101   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.101   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu26/cpufreq
00:21:20.101   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.101   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.102   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 27 1000000 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@367 -- # local cpu=27
00:21:20.102   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.102   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.102   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.102   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.102   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 27 powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@395 -- # local cpu=27
00:21:20.102   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu27/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.102   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.102   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 28 1000000 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@367 -- # local cpu=28
00:21:20.102   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.102   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.102   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.102   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.102   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 28 powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@395 -- # local cpu=28
00:21:20.102   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu28/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.102   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.102   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 29 1000000 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@367 -- # local cpu=29
00:21:20.102   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.102   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.102   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.102   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.102   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 29 powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@395 -- # local cpu=29
00:21:20.102   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu29/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.102   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.102   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 30 1000000 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@367 -- # local cpu=30
00:21:20.102   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.102   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.102   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.102   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.102   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 30 powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@395 -- # local cpu=30
00:21:20.102   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu30/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.102   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.102   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 31 1000000 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@367 -- # local cpu=31
00:21:20.102   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.102   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.102   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.102   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.102   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 31 powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@395 -- # local cpu=31
00:21:20.102   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu31/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.102   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.102   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 32 1000000 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@367 -- # local cpu=32
00:21:20.102   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.102   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.102   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.102   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.102   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 32 powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@395 -- # local cpu=32
00:21:20.102   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.102   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu32/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.102   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.102   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 33 1000000 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@367 -- # local cpu=33
00:21:20.102   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.102   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq
00:21:20.102   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.102   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.102   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.102   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.102   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 33 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=33
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu33/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 34 1000000 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@367 -- # local cpu=34
00:21:20.103   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.103   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.103   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.103   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 34 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=34
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu34/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 35 1000000 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@367 -- # local cpu=35
00:21:20.103   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.103   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.103   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.103   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 35 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=35
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu35/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 54 1000000 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@367 -- # local cpu=54
00:21:20.103   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.103   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.103   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.103   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 54 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=54
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu54/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 55 1000000 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@367 -- # local cpu=55
00:21:20.103   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.103   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.103   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.103   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 55 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=55
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu55/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 56 1000000 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@367 -- # local cpu=56
00:21:20.103   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.103   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.103   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.103   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 56 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=56
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu56/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 57 1000000 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@367 -- # local cpu=57
00:21:20.103   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.103   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.103   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.103   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 57 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=57
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu57/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 58 1000000 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@367 -- # local cpu=58
00:21:20.103   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.103   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.103   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.103   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.103   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.103   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.103   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 58 powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@395 -- # local cpu=58
00:21:20.103   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.103   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu58/cpufreq
00:21:20.103   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.103   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.103   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 59 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=59
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 59 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=59
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu59/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.104   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 60 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=60
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 60 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=60
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu60/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.104   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 61 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=61
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu61/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 61 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=61
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu61/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.104   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 62 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=62
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 62 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=62
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu62/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.104   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 63 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=63
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 63 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=63
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu63/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.104   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 64 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=64
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 64 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=64
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu64/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.104   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 65 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=65
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 65 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=65
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu65/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.104   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 66 1000000 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@367 -- # local cpu=66
00:21:20.104   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.104   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.104   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.104   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.104   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.104   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.104   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 66 powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@395 -- # local cpu=66
00:21:20.104   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.104   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu66/cpufreq
00:21:20.104   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.104   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.105   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 67 1000000 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@367 -- # local cpu=67
00:21:20.105   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.105   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.105   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.105   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.105   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 67 powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@395 -- # local cpu=67
00:21:20.105   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu67/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.105   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.105   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 68 1000000 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@367 -- # local cpu=68
00:21:20.105   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.105   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.105   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.105   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.105   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 68 powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@395 -- # local cpu=68
00:21:20.105   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu68/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.105   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.105   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 69 1000000 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@367 -- # local cpu=69
00:21:20.105   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.105   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu69/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.105   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.105   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.105   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 69 powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@395 -- # local cpu=69
00:21:20.105   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu69/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.105   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.105   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 70 1000000 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@367 -- # local cpu=70
00:21:20.105   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.105   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.105   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.105   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.105   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 70 powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@395 -- # local cpu=70
00:21:20.105   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu70/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
00:21:20.105   10:59:08	-- scheduler/governor.sh@17 -- # for cpu in "$spdk_main_core" "${cpus[@]}"
00:21:20.105   10:59:08	-- scheduler/governor.sh@18 -- # set_cpufreq 71 1000000 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@367 -- # local cpu=71
00:21:20.105   10:59:08	-- scheduler/common.sh@368 -- # local min_freq=1000000
00:21:20.105   10:59:08	-- scheduler/common.sh@369 -- # local max_freq=2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@370 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@373 -- # [[ -n intel_pstate ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@374 -- # [[ -n 1000000 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@376 -- # case "${cpufreq_drivers[cpu]}" in
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # [[ -n 2300001 ]]
00:21:20.105   10:59:08	-- scheduler/common.sh@384 -- # (( max_freq >= min_freq ))
00:21:20.105   10:59:08	-- scheduler/common.sh@385 -- # echo 2300001
00:21:20.105   10:59:08	-- scheduler/common.sh@387 -- # (( min_freq <= cpufreq_max_freqs[cpu] ))
00:21:20.105   10:59:08	-- scheduler/common.sh@388 -- # echo 1000000
00:21:20.105   10:59:08	-- scheduler/governor.sh@19 -- # set_cpufreq_governor 71 powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@395 -- # local cpu=71
00:21:20.105   10:59:08	-- scheduler/common.sh@396 -- # local governor=powersave
00:21:20.105   10:59:08	-- scheduler/common.sh@397 -- # local cpufreq=/sys/devices/system/cpu/cpu71/cpufreq
00:21:20.105   10:59:08	-- scheduler/common.sh@399 -- # [[ powersave != \p\o\w\e\r\s\a\v\e ]]
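The loop traced above restores every worker core, one by one, to its full 1000000-2300001 kHz window and the powersave governor by writing cpufreq sysfs attributes. A minimal standalone sketch of the same teardown step (the helper name restore_cpufreq is illustrative; the real scheduler/common.sh helpers carry extra driver-specific branches not shown here):

  # clamp one core's frequency window and set its governor via sysfs;
  # requires root and a cpufreq-capable core
  restore_cpufreq() {
      local cpu=$1 min_freq=$2 max_freq=$3 governor=$4
      local cpufreq=/sys/devices/system/cpu/cpu$cpu/cpufreq
      [[ -d $cpufreq ]] || return 1
      (( max_freq >= min_freq )) || return 1
      # write max before min so the window is never momentarily inverted
      echo "$max_freq" > "$cpufreq/scaling_max_freq"
      echo "$min_freq" > "$cpufreq/scaling_min_freq"
      # skip the governor write when it already matches, as the trace does
      if [[ $(<"$cpufreq/scaling_governor") != "$governor" ]]; then
          echo "$governor" > "$cpufreq/scaling_governor"
      fi
  }
  restore_cpufreq 71 1000000 2300001 powersave   # values taken from the log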
00:21:20.105  
00:21:20.105  real	0m19.380s
00:21:20.105  user	0m45.840s
00:21:20.105  sys	0m5.410s
00:21:20.105   10:59:08	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:21:20.105   10:59:08	-- common/autotest_common.sh@10 -- # set +x
00:21:20.105  ************************************
00:21:20.105  END TEST dpdk_governor
00:21:20.105  ************************************
00:21:20.105   10:59:08	-- scheduler/scheduler.sh@17 -- # run_test interrupt_mode /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh
00:21:20.105   10:59:08	-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:21:20.105   10:59:08	-- common/autotest_common.sh@1093 -- # xtrace_disable
00:21:20.105   10:59:08	-- common/autotest_common.sh@10 -- # set +x
00:21:20.105  ************************************
00:21:20.105  START TEST interrupt_mode
00:21:20.105  ************************************
00:21:20.105   10:59:09	-- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/interrupt.sh
00:21:20.105  * Looking for test storage...
00:21:20.105  * Found test storage at /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler
00:21:20.105    10:59:09	-- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:21:20.105     10:59:09	-- common/autotest_common.sh@1690 -- # lcov --version
00:21:20.105     10:59:09	-- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:21:20.366    10:59:09	-- common/autotest_common.sh@1690 -- # lt 1.15 2
00:21:20.366    10:59:09	-- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:21:20.366    10:59:09	-- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:20.366    10:59:09	-- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:20.366    10:59:09	-- scripts/common.sh@335 -- # IFS=.-:
00:21:20.366    10:59:09	-- scripts/common.sh@335 -- # read -ra ver1
00:21:20.366    10:59:09	-- scripts/common.sh@336 -- # IFS=.-:
00:21:20.366    10:59:09	-- scripts/common.sh@336 -- # read -ra ver2
00:21:20.366    10:59:09	-- scripts/common.sh@337 -- # local 'op=<'
00:21:20.366    10:59:09	-- scripts/common.sh@339 -- # ver1_l=2
00:21:20.366    10:59:09	-- scripts/common.sh@340 -- # ver2_l=1
00:21:20.366    10:59:09	-- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:20.366    10:59:09	-- scripts/common.sh@343 -- # case "$op" in
00:21:20.366    10:59:09	-- scripts/common.sh@344 -- # : 1
00:21:20.366    10:59:09	-- scripts/common.sh@363 -- # (( v = 0 ))
00:21:20.366    10:59:09	-- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:20.366     10:59:09	-- scripts/common.sh@364 -- # decimal 1
00:21:20.366     10:59:09	-- scripts/common.sh@352 -- # local d=1
00:21:20.366     10:59:09	-- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:20.366     10:59:09	-- scripts/common.sh@354 -- # echo 1
00:21:20.366    10:59:09	-- scripts/common.sh@364 -- # ver1[v]=1
00:21:20.366     10:59:09	-- scripts/common.sh@365 -- # decimal 2
00:21:20.366     10:59:09	-- scripts/common.sh@352 -- # local d=2
00:21:20.366     10:59:09	-- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:20.366     10:59:09	-- scripts/common.sh@354 -- # echo 2
00:21:20.366    10:59:09	-- scripts/common.sh@365 -- # ver2[v]=2
00:21:20.366    10:59:09	-- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:20.366    10:59:09	-- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:20.366    10:59:09	-- scripts/common.sh@367 -- # return 0
00:21:20.366    10:59:09	-- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:20.366    10:59:09	-- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:21:20.366  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:20.366  		--rc genhtml_branch_coverage=1
00:21:20.366  		--rc genhtml_function_coverage=1
00:21:20.366  		--rc genhtml_legend=1
00:21:20.366  		--rc geninfo_all_blocks=1
00:21:20.366  		--rc geninfo_unexecuted_blocks=1
00:21:20.366  		
00:21:20.366  		'
00:21:20.366    10:59:09	-- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:21:20.366  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:20.366  		--rc genhtml_branch_coverage=1
00:21:20.366  		--rc genhtml_function_coverage=1
00:21:20.366  		--rc genhtml_legend=1
00:21:20.366  		--rc geninfo_all_blocks=1
00:21:20.366  		--rc geninfo_unexecuted_blocks=1
00:21:20.366  		
00:21:20.366  		'
00:21:20.366    10:59:09	-- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:21:20.366  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:20.366  		--rc genhtml_branch_coverage=1
00:21:20.366  		--rc genhtml_function_coverage=1
00:21:20.366  		--rc genhtml_legend=1
00:21:20.366  		--rc geninfo_all_blocks=1
00:21:20.366  		--rc geninfo_unexecuted_blocks=1
00:21:20.366  		
00:21:20.366  		'
00:21:20.366    10:59:09	-- common/autotest_common.sh@1704 -- # LCOV='lcov 
00:21:20.366  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:20.366  		--rc genhtml_branch_coverage=1
00:21:20.366  		--rc genhtml_function_coverage=1
00:21:20.366  		--rc genhtml_legend=1
00:21:20.366  		--rc geninfo_all_blocks=1
00:21:20.366  		--rc geninfo_unexecuted_blocks=1
00:21:20.366  		
00:21:20.366  		'
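The version gate traced above (lt 1.15 2 via cmp_versions) decides whether the installed lcov is old enough to need the legacy --rc option names: both version strings are split on '.', '-' and ':' and compared field by field as integers. A compact standalone rendition (ver_lt is an illustrative name; the traced decimal() sanitising of non-numeric fields is omitted):

  # field-wise numeric version comparison; returns 0 when $1 < $2
  ver_lt() {
      local IFS=.-:
      local -a ver1 ver2
      local v len
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }
  ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "legacy lcov flags"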
00:21:20.366   10:59:09	-- scheduler/interrupt.sh@10 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/common.sh
00:21:20.366    10:59:09	-- scheduler/common.sh@6 -- # declare -r sysfs_system=/sys/devices/system
00:21:20.366    10:59:09	-- scheduler/common.sh@7 -- # declare -r sysfs_cpu=/sys/devices/system/cpu
00:21:20.366    10:59:09	-- scheduler/common.sh@8 -- # declare -r sysfs_node=/sys/devices/system/node
00:21:20.366    10:59:09	-- scheduler/common.sh@10 -- # declare -r scheduler=/var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler
00:21:20.366    10:59:09	-- scheduler/common.sh@11 -- # declare plugin=scheduler_plugin
00:21:20.366    10:59:09	-- scheduler/common.sh@13 -- # source /var/jenkins/workspace/nvme-phy-autotest/spdk/test/scheduler/cgroups.sh
00:21:20.366     10:59:09	-- scheduler/cgroups.sh@245 -- # declare -r sysfs_cgroup=/sys/fs/cgroup
00:21:20.366      10:59:09	-- scheduler/cgroups.sh@246 -- # check_cgroup
00:21:20.366      10:59:09	-- scheduler/cgroups.sh@8 -- # [[ -e /sys/fs/cgroup/cgroup.controllers ]]
00:21:20.366      10:59:09	-- scheduler/cgroups.sh@10 -- # [[ cpuset cpu io memory hugetlb pids rdma misc == *cpuset* ]]
00:21:20.366      10:59:09	-- scheduler/cgroups.sh@10 -- # echo 2
00:21:20.366     10:59:09	-- scheduler/cgroups.sh@246 -- # cgroup_version=2
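check_cgroup, traced above, tells cgroup v2 from v1 by probing the unified hierarchy's cgroup.controllers file and confirming the cpuset controller is present. The same probe in isolation (the v1 fallback branch is an assumption of the sketch; the trace only exercises the v2 path):

  # detect the cgroup version the way the traced check_cgroup does
  sysfs_cgroup=/sys/fs/cgroup
  if [[ -e $sysfs_cgroup/cgroup.controllers &&
        $(<"$sysfs_cgroup/cgroup.controllers") == *cpuset* ]]; then
      cgroup_version=2   # unified hierarchy with a usable cpuset controller
  else
      cgroup_version=1   # assumed legacy fallback; not exercised in this run
  fi
  echo "cgroup v$cgroup_version"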
00:21:20.366   10:59:09	-- scheduler/interrupt.sh@12 -- # trap 'killprocess "$spdk_pid"' EXIT
00:21:20.366   10:59:09	-- scheduler/interrupt.sh@14 -- # cpus=()
00:21:20.366   10:59:09	-- scheduler/interrupt.sh@14 -- # declare -a cpus
00:21:20.366   10:59:09	-- scheduler/interrupt.sh@15 -- # cpus_to_collect=()
00:21:20.366   10:59:09	-- scheduler/interrupt.sh@15 -- # declare -a cpus_to_collect
00:21:20.366    10:59:09	-- scheduler/interrupt.sh@17 -- # parse_cpu_list /dev/fd/62
00:21:20.366    10:59:09	-- scheduler/common.sh@34 -- # local list=/dev/fd/62
00:21:20.366     10:59:09	-- scheduler/interrupt.sh@17 -- # echo 1,2,3,4,37,38,39,40
00:21:20.366    10:59:09	-- scheduler/common.sh@35 -- # local elem elems cpus
00:21:20.366    10:59:09	-- scheduler/common.sh@38 -- # IFS=,
00:21:20.366    10:59:09	-- scheduler/common.sh@38 -- # read -ra elems
00:21:20.366    10:59:09	-- scheduler/common.sh@40 -- # (( 8 > 0 ))
00:21:20.366    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.366    10:59:09	-- scheduler/common.sh@43 -- # [[ 1 == *-* ]]
00:21:20.366    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=1
00:21:20.366    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.366    10:59:09	-- scheduler/common.sh@43 -- # [[ 2 == *-* ]]
00:21:20.366    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=2
00:21:20.366    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.366    10:59:09	-- scheduler/common.sh@43 -- # [[ 3 == *-* ]]
00:21:20.366    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=3
00:21:20.366    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.366    10:59:09	-- scheduler/common.sh@43 -- # [[ 4 == *-* ]]
00:21:20.366    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=4
00:21:20.366    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.366    10:59:09	-- scheduler/common.sh@43 -- # [[ 37 == *-* ]]
00:21:20.366    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=37
00:21:20.366    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.367    10:59:09	-- scheduler/common.sh@43 -- # [[ 38 == *-* ]]
00:21:20.367    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=38
00:21:20.367    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.367    10:59:09	-- scheduler/common.sh@43 -- # [[ 39 == *-* ]]
00:21:20.367    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=39
00:21:20.367    10:59:09	-- scheduler/common.sh@42 -- # for elem in "${elems[@]}"
00:21:20.367    10:59:09	-- scheduler/common.sh@43 -- # [[ 40 == *-* ]]
00:21:20.367    10:59:09	-- scheduler/common.sh@49 -- # cpus[elem]=40
00:21:20.367    10:59:09	-- scheduler/common.sh@52 -- # printf '%u\n' 1 2 3 4 37 38 39 40
00:21:20.367   10:59:09	-- scheduler/interrupt.sh@17 -- # fold_list_onto_array cpus 1 2 3 4 37 38 39 40
00:21:20.367   10:59:09	-- scheduler/common.sh@16 -- # local array=cpus
00:21:20.367   10:59:09	-- scheduler/common.sh@17 -- # local elem
00:21:20.367   10:59:09	-- scheduler/common.sh@19 -- # shift
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=1'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=1
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=2'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=2
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=3'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=3
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=4'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=4
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=37'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=37
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=38'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=38
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=39'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=39
00:21:20.367   10:59:09	-- scheduler/common.sh@21 -- # for elem in "$@"
00:21:20.367   10:59:09	-- scheduler/common.sh@22 -- # eval 'cpus[elem]=40'
00:21:20.367    10:59:09	-- scheduler/common.sh@22 -- # cpus[elem]=40
00:21:20.367   10:59:09	-- scheduler/interrupt.sh@19 -- # cpus=("${cpus[@]}")
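parse_cpu_list and fold_list_onto_array, traced above, turn the comma list 1,2,3,4,37,38,39,40 into a bash array keyed by cpu id; indexing the array by the cpu number is what deduplicates repeats, and the final cpus=("${cpus[@]}") repacks the sparse array. Only the scalar branch runs in this log ([[ elem == *-* ]] is false for every element); the sketch below also covers the range form the parser accepts (parse_cpus is an illustrative name):

  # expand a cpu list such as "1-4,37-40" into a deduplicated index-ordered list
  parse_cpus() {
      local list=$1 elem start end cpu
      local -a elems out=()
      IFS=, read -ra elems <<< "$list"
      for elem in "${elems[@]}"; do
          if [[ $elem == *-* ]]; then
              start=${elem%-*} end=${elem#*-}
              for (( cpu = start; cpu <= end; cpu++ )); do out[cpu]=$cpu; done
          else
              out[elem]=$elem   # index by cpu id, so repeats collapse
          fi
      done
      printf '%u\n' "${out[@]}"
  }
  parse_cpus 1-4,37-40   # -> 1 2 3 4 37 38 39 40 (one per line), as traced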
00:21:20.367   10:59:09	-- scheduler/interrupt.sh@78 -- # exec_under_dynamic_scheduler /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1
00:21:20.367   10:59:09	-- scheduler/common.sh@405 -- # [[ -e /proc//status ]]
00:21:20.367   10:59:09	-- scheduler/common.sh@409 -- # spdk_pid=2221244
00:21:20.367   10:59:09	-- scheduler/common.sh@408 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/test/event/scheduler/scheduler -m '[1,2,3,4,37,38,39,40]' --main-core 1 --wait-for-rpc
00:21:20.367   10:59:09	-- scheduler/common.sh@411 -- # waitforlisten 2221244
00:21:20.367   10:59:09	-- common/autotest_common.sh@829 -- # '[' -z 2221244 ']'
00:21:20.367   10:59:09	-- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:20.367   10:59:09	-- common/autotest_common.sh@834 -- # local max_retries=100
00:21:20.367   10:59:09	-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:20.367  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:20.367   10:59:09	-- common/autotest_common.sh@838 -- # xtrace_disable
00:21:20.367   10:59:09	-- common/autotest_common.sh@10 -- # set +x
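waitforlisten, entered above, polls until the freshly launched scheduler app (pid 2221244) is alive and serving RPCs on /var/tmp/spdk.sock, giving up after max_retries=100. A stripped-down sketch of that startup handshake (wait_for_sock is an illustrative name; the real helper also issues an rpc probe rather than only checking that the socket node exists):

  # wait for an spdk-style app to come up on its unix rpc socket
  wait_for_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for (( i = 0; i < 100; i++ )); do          # max_retries=100, as in the log
          kill -0 "$pid" 2>/dev/null || return 1 # app died during startup
          [[ -S $sock ]] && return 0             # socket node is present
          sleep 0.1
      done
      return 1                                   # timed out
  }
  wait_for_sock "$spdk_pid" && echo "scheduler app is up"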
00:21:20.367  [2024-12-15 10:59:09.178003] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:21:20.367  [2024-12-15 10:59:09.178076] [ DPDK EAL parameters: scheduler --no-shconf -l 1,2,3,4,37,38,39,40 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221244 ]
00:21:20.367  EAL: No free 2048 kB hugepages reported on node 1
00:21:20.367  [2024-12-15 10:59:09.274297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 8
00:21:20.626  [2024-12-15 10:59:09.385940] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:20.626  [2024-12-15 10:59:09.386032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:20.626  [2024-12-15 10:59:09.386051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:21:20.626  [2024-12-15 10:59:09.386157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 40
00:21:20.626  [2024-12-15 10:59:09.386073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 37
00:21:20.626  [2024-12-15 10:59:09.386101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 38
00:21:20.626  [2024-12-15 10:59:09.386136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 39
00:21:20.626  [2024-12-15 10:59:09.386160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:21.563   10:59:10	-- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:21.563   10:59:10	-- common/autotest_common.sh@862 -- # return 0
00:21:21.563   10:59:10	-- scheduler/common.sh@412 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic
00:21:22.132  POWER: Env isn't set yet!
00:21:22.132  POWER: Attempting to initialise ACPI cpufreq power management...
00:21:22.133  POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:21:22.133  POWER: Cannot set governor of lcore 1 to userspace
00:21:22.133  POWER: Attempting to initialise PSTAT power management...
00:21:22.133  POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 1 power management
00:21:22.133  POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 2 power management
00:21:22.133  POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 3 power management
00:21:22.133  POWER: Power management governor of lcore 4 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 4 power management
00:21:22.133  POWER: Power management governor of lcore 37 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 37 power management
00:21:22.133  POWER: Power management governor of lcore 38 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 38 power management
00:21:22.133  POWER: Power management governor of lcore 39 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 39 power management
00:21:22.133  POWER: Power management governor of lcore 40 has been set to 'performance' successfully
00:21:22.133  POWER: Initialized successfully for lcore 40 power management
00:21:22.133  [2024-12-15 10:59:11.138831] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:21:22.133  [2024-12-15 10:59:11.138871] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:21:22.133  [2024-12-15 10:59:11.138888] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
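The three set_opts notices above record the percentage thresholds the dynamic scheduler will use when balancing threads across the reactor cores for this run: load limit 20, core limit 80, core busy 95. Recent SPDK rpc.py builds expose matching knobs on framework_set_scheduler; the flag spellings below are inferred from the option names in the notices, so verify them against the installed scripts/rpc.py before relying on them:

  # assumed flag names -- check scripts/rpc.py framework_set_scheduler --help
  scripts/rpc.py framework_set_scheduler dynamic \
      --load-limit 20 --core-limit 80 --core-busy 95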
00:21:22.392   10:59:11	-- scheduler/common.sh@413 -- # /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:21:22.650  [2024-12-15 10:59:11.570195] 'OCF_Core' volume operations registered
00:21:22.650  [2024-12-15 10:59:11.573642] 'OCF_Cache' volume operations registered
00:21:22.650  [2024-12-15 10:59:11.577519] 'OCF Composite' volume operations registered
00:21:22.650  [2024-12-15 10:59:11.580980] 'SPDK_block_device' volume operations registered
00:21:22.651  [2024-12-15 10:59:11.582040] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:21:22.651   10:59:11	-- scheduler/interrupt.sh@80 -- # interrupt
00:21:22.651   10:59:11	-- scheduler/interrupt.sh@22 -- # local busy_cpus
00:21:22.651   10:59:11	-- scheduler/interrupt.sh@23 -- # local cpu thread
00:21:22.651   10:59:11	-- scheduler/interrupt.sh@25 -- # local reactor_framework
00:21:22.651   10:59:11	-- scheduler/interrupt.sh@27 -- # cpus_to_collect=("${cpus[@]}")
00:21:22.651   10:59:11	-- scheduler/interrupt.sh@28 -- # collect_cpu_idle
00:21:22.651   10:59:11	-- scheduler/common.sh@626 -- # (( 8 > 0 ))
00:21:22.651   10:59:11	-- scheduler/common.sh@628 -- # local time=5
00:21:22.651   10:59:11	-- scheduler/common.sh@629 -- # local cpu
00:21:22.651   10:59:11	-- scheduler/common.sh@630 -- # local samples
00:21:22.651   10:59:11	-- scheduler/common.sh@631 -- # is_idle=()
00:21:22.651   10:59:11	-- scheduler/common.sh@631 -- # local -g is_idle
00:21:22.651   10:59:11	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' '1 2 3 4 37 38 39 40' 5
00:21:22.651  Collecting cpu idle stats (cpus: 1 2 3 4 37 38 39 40) for 5 seconds...
00:21:22.651   10:59:11	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 1 2 3 4 37 38 39 40
00:21:22.651   10:59:11	-- scheduler/common.sh@483 -- # xtrace_disable
00:21:22.651   10:59:11	-- common/autotest_common.sh@10 -- # set +x
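collect_cpu_idle now samples each of the eight collected cpus once per second for time=5 seconds; get_cpu_time reads the cumulative counters from /proc/stat and the later math works on the deltas. A self-contained sketch of one such per-cpu idle sampler (sample_idle is an illustrative name; /proc/stat's cpuN rows start with user, nice, system, idle in jiffies, and the sketch ignores the iowait/irq fields the real helper also tracks):

  # print one idle%-sample per second for a cpu, from /proc/stat deltas
  sample_idle() {
      local cpu=$1 seconds=$2 s user nice system idle rest total
      local prev_idle=0 prev_total=0
      for (( s = 0; s <= seconds; s++ )); do
          read -r _ user nice system idle rest < <(grep "^cpu$cpu " /proc/stat)
          total=$(( user + nice + system + idle ))
          # the first pass only seeds the counters; every later pass is a delta
          (( s > 0 )) && echo $(( 100 * (idle - prev_idle) / (total - prev_total) ))
          prev_idle=$idle prev_total=$total
          sleep 1
      done
  }
  sample_idle 2 5   # e.g. "95 100 100 100 100", like the cpu2 samples below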
00:21:29.222   10:59:17	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:21:29.222   10:59:17	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:29.222   10:59:17	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:29.222    10:59:17	-- scheduler/common.sh@641 -- # calc_median 0 0 0 0 0
00:21:29.222    10:59:17	-- scheduler/common.sh@727 -- # samples=('0' '0' '0' '0' '0')
00:21:29.222    10:59:17	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:29.222    10:59:17	-- scheduler/common.sh@728 -- # local middle median sample
00:21:29.222    10:59:17	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:29.222     10:59:17	-- scheduler/common.sh@730 -- # printf '%s\n' 0 0 0 0 0
00:21:29.222     10:59:17	-- scheduler/common.sh@730 -- # sort -n
00:21:29.222    10:59:17	-- scheduler/common.sh@732 -- # middle=2
00:21:29.222    10:59:17	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:29.222    10:59:17	-- scheduler/common.sh@736 -- # median=0
00:21:29.222    10:59:17	-- scheduler/common.sh@739 -- # echo 0
00:21:29.222   10:59:17	-- scheduler/common.sh@641 -- # load_median=0
00:21:29.222   10:59:17	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 1 '0 0 0 0 0' 0 0
00:21:29.222  * cpu1 idle samples: 0 0 0 0 0 (avg: 0%, median: 0%)
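calc_median, traced just above, sorts the five idle samples numerically and takes the middle element, so a single outlier second cannot flip the verdict. The same function in isolation (only the odd-count path matters here; the traced even-count averaging branch is never taken for these five-sample windows):

  # median of an odd number of samples, as used on the 5-sample windows
  calc_median() {
      local -a sorted
      mapfile -t sorted < <(printf '%s\n' "$@" | sort -n)
      echo "${sorted[$(( $# / 2 ))]}"   # middle element after a numeric sort
  }
  calc_median 0 0 0 0 0            # -> 0, the cpu1 median above
  calc_median 95 100 100 100 100   # -> 100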
00:21:29.222    10:59:17	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 1 user
00:21:29.222    10:59:17	-- scheduler/common.sh@678 -- # local cpu=1 time=user
00:21:29.222    10:59:17	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:29.222    10:59:17	-- scheduler/common.sh@682 -- # [[ -v raw_samples_1 ]]
00:21:29.222    10:59:17	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_1
00:21:29.222    10:59:17	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:29.222    10:59:17	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:29.222    10:59:17	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:29.222    10:59:17	-- scheduler/common.sh@690 -- # case "$time" in
00:21:29.222    10:59:17	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:29.222     10:59:17	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:29.222    10:59:17	-- scheduler/common.sh@697 -- # usage=101
00:21:29.222    10:59:17	-- scheduler/common.sh@698 -- # usage=100
00:21:29.222    10:59:17	-- scheduler/common.sh@700 -- # printf %u 100
00:21:29.222    10:59:17	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 1 user 100
00:21:29.222  * cpu1 user usage: 100
00:21:29.222    10:59:17	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 1 '283549 283650 283751 283851 283952'
00:21:29.222  * cpu1 user samples: 283549 283650 283751 283851 283952
00:21:29.222    10:59:17	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 1 '61 61 61 61 61'
00:21:29.222  * cpu1 nice samples: 61 61 61 61 61
00:21:29.222    10:59:17	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 1 '13951 13951 13951 13951 13952'
00:21:29.222  * cpu1 system samples: 13951 13951 13951 13951 13952
00:21:29.222   10:59:17	-- scheduler/common.sh@652 -- # user_load=100
00:21:29.222   10:59:17	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:29.222   10:59:17	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:21:29.222   10:59:17	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 1
00:21:29.222  * cpu1 is not idle
00:21:29.222   10:59:17	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
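The cpu1 verdict above combines two signals: the user-time delta over the window, scaled to a percentage with getconf CLK_TCK (a 101-jiffy delta at CLK_TCK=100 clamps to usage=100), and the most recent idle sample. A condensed rendition of the decision order visible in the trace (thresholds 70 and 15 are taken from the traced checks; the final branch is not exercised in this run, so treating it as idle is an assumption):

  # idle/busy classification, in the order the traced checks run
  classify_cpu() {
      local cpu=$1 user_load=$2 last_idle_sample=$3
      if (( last_idle_sample >= 70 )); then
          echo "cpu$cpu is idle"          # the last window was mostly idle
      elif (( user_load > 15 )); then
          echo "cpu$cpu is not idle"      # sustained user-time load
      else
          echo "cpu$cpu is idle"          # assumed; branch unseen in this log
      fi
  }
  classify_cpu 1 100 0   # -> "cpu1 is not idle", matching the line above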
00:21:29.222    10:59:17	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 1
00:21:29.222    10:59:17	-- scheduler/common.sh@747 -- # xtrace_disable
00:21:29.222    10:59:17	-- common/autotest_common.sh@10 -- # set +x
00:21:33.419  stime samples: 0 0 1 0
00:21:33.420  utime samples: 0 100 100 99
00:21:33.420   10:59:21	-- scheduler/common.sh@666 -- # user_spdk_load=99
00:21:33.420   10:59:21	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
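Besides the system-wide counters, the test checks that the SPDK process itself is what is burning cpu1: get_spdk_proc_time samples the scheduler app's stime/utime, and user_spdk_load=99 above confirms the reactor accounts for essentially the whole core. A hedged stand-in for that probe (proc_cpu_pct is an illustrative name; fields 14 and 15 of /proc/<pid>/stat are utime and stime in jiffies, and the awk field numbering assumes a comm name without spaces):

  # percent of one core a process used over a one-second window
  proc_cpu_pct() {
      local pid=$1 u0 s0 u1 s1 hz
      hz=$(getconf CLK_TCK)
      read -r u0 s0 < <(awk '{print $14, $15}' "/proc/$pid/stat")
      sleep 1
      read -r u1 s1 < <(awk '{print $14, $15}' "/proc/$pid/stat")
      echo $(( (u1 - u0 + s1 - s0) * 100 / hz ))   # jiffies/s -> percent
  }
  proc_cpu_pct "$spdk_pid"   # ~99 while the busy reactor spins on cpu1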
00:21:33.420   10:59:21	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:33.420   10:59:21	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:33.420    10:59:21	-- scheduler/common.sh@641 -- # calc_median 95 100 100 100 100
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # samples=('95' '100' '100' '100' '100')
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:33.420    10:59:21	-- scheduler/common.sh@728 -- # local middle median sample
00:21:33.420    10:59:21	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:33.420     10:59:21	-- scheduler/common.sh@730 -- # printf '%s\n' 95 100 100 100 100
00:21:33.420     10:59:21	-- scheduler/common.sh@730 -- # sort -n
00:21:33.420    10:59:21	-- scheduler/common.sh@732 -- # middle=2
00:21:33.420    10:59:21	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:33.420    10:59:21	-- scheduler/common.sh@736 -- # median=100
00:21:33.420    10:59:21	-- scheduler/common.sh@739 -- # echo 100
00:21:33.420   10:59:21	-- scheduler/common.sh@641 -- # load_median=100
00:21:33.420   10:59:21	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '95 100 100 100 100' 99 100
00:21:33.420  * cpu2 idle samples: 95 100 100 100 100 (avg: 99%, median: 100%)
00:21:33.420    10:59:21	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 2 user
00:21:33.420    10:59:21	-- scheduler/common.sh@678 -- # local cpu=2 time=user
00:21:33.420    10:59:21	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:33.420    10:59:21	-- scheduler/common.sh@682 -- # [[ -v raw_samples_2 ]]
00:21:33.420    10:59:21	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_2
00:21:33.420    10:59:21	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@690 -- # case "$time" in
00:21:33.420    10:59:21	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:33.420     10:59:21	-- scheduler/common.sh@691 -- # trap - ERR
00:21:33.420     10:59:21	-- scheduler/common.sh@691 -- # print_backtrace
00:21:33.420     10:59:21	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:21:33.420     10:59:21	-- common/autotest_common.sh@1142 -- # return 0
00:21:33.420     10:59:21	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:33.420    10:59:21	-- scheduler/common.sh@697 -- # usage=0
00:21:33.420    10:59:21	-- scheduler/common.sh@698 -- # usage=0
00:21:33.420    10:59:21	-- scheduler/common.sh@700 -- # printf %u 0
00:21:33.420    10:59:21	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 2 user 0
00:21:33.420  * cpu2 user usage: 0
00:21:33.420    10:59:21	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 2 '158779 158779 158779 158779 158779'
00:21:33.420  * cpu2 user samples: 158779 158779 158779 158779 158779
00:21:33.420    10:59:21	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0'
00:21:33.420  * cpu2 nice samples: 0 0 0 0 0
00:21:33.420    10:59:21	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 2 '11969 11969 11969 11969 11969'
00:21:33.420  * cpu2 system samples: 11969 11969 11969 11969 11969
00:21:33.420   10:59:21	-- scheduler/common.sh@652 -- # user_load=0
00:21:33.420   10:59:21	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:33.420   10:59:21	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 2
00:21:33.420  * cpu2 is idle
00:21:33.420   10:59:21	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:21:33.420   10:59:21	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:33.420   10:59:21	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:33.420    10:59:21	-- scheduler/common.sh@641 -- # calc_median 99 100 100 100 100
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # samples=('99' '100' '100' '100' '100')
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:33.420    10:59:21	-- scheduler/common.sh@728 -- # local middle median sample
00:21:33.420    10:59:21	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:33.420     10:59:21	-- scheduler/common.sh@730 -- # printf '%s\n' 99 100 100 100 100
00:21:33.420     10:59:21	-- scheduler/common.sh@730 -- # sort -n
00:21:33.420    10:59:21	-- scheduler/common.sh@732 -- # middle=2
00:21:33.420    10:59:21	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:33.420    10:59:21	-- scheduler/common.sh@736 -- # median=100
00:21:33.420    10:59:21	-- scheduler/common.sh@739 -- # echo 100
00:21:33.420   10:59:21	-- scheduler/common.sh@641 -- # load_median=100
00:21:33.420   10:59:21	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '99 100 100 100 100' 99 100
00:21:33.420  * cpu3 idle samples: 99 100 100 100 100 (avg: 99%, median: 100%)
00:21:33.420    10:59:21	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 3 user
00:21:33.420    10:59:21	-- scheduler/common.sh@678 -- # local cpu=3 time=user
00:21:33.420    10:59:21	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:33.420    10:59:21	-- scheduler/common.sh@682 -- # [[ -v raw_samples_3 ]]
00:21:33.420    10:59:21	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_3
00:21:33.420    10:59:21	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@690 -- # case "$time" in
00:21:33.420    10:59:21	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:33.420     10:59:21	-- scheduler/common.sh@691 -- # trap - ERR
00:21:33.420     10:59:21	-- scheduler/common.sh@691 -- # print_backtrace
00:21:33.420     10:59:21	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:21:33.420     10:59:21	-- common/autotest_common.sh@1142 -- # return 0
00:21:33.420     10:59:21	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:33.420    10:59:21	-- scheduler/common.sh@697 -- # usage=0
00:21:33.420    10:59:21	-- scheduler/common.sh@698 -- # usage=0
00:21:33.420    10:59:21	-- scheduler/common.sh@700 -- # printf %u 0
00:21:33.420    10:59:21	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 3 user 0
00:21:33.420  * cpu3 user usage: 0
00:21:33.420    10:59:21	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 3 '132172 132172 132172 132172 132172'
00:21:33.420  * cpu3 user samples: 132172 132172 132172 132172 132172
00:21:33.420    10:59:21	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 3 '0 0 0 0 0'
00:21:33.420  * cpu3 nice samples: 0 0 0 0 0
00:21:33.420    10:59:21	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 3 '10407 10407 10407 10407 10407'
00:21:33.420  * cpu3 system samples: 10407 10407 10407 10407 10407
00:21:33.420   10:59:21	-- scheduler/common.sh@652 -- # user_load=0
00:21:33.420   10:59:21	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:33.420   10:59:21	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 3
00:21:33.420  * cpu3 is idle
00:21:33.420   10:59:21	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:21:33.420   10:59:21	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:33.420   10:59:21	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:33.420    10:59:21	-- scheduler/common.sh@641 -- # calc_median 99 99 100 100 100
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # samples=('99' '99' '100' '100' '100')
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:33.420    10:59:21	-- scheduler/common.sh@728 -- # local middle median sample
00:21:33.420    10:59:21	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:33.420     10:59:21	-- scheduler/common.sh@730 -- # printf '%s\n' 99 99 100 100 100
00:21:33.420     10:59:21	-- scheduler/common.sh@730 -- # sort -n
00:21:33.420    10:59:21	-- scheduler/common.sh@732 -- # middle=2
00:21:33.420    10:59:21	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:33.420    10:59:21	-- scheduler/common.sh@736 -- # median=100
00:21:33.420    10:59:21	-- scheduler/common.sh@739 -- # echo 100
00:21:33.420   10:59:21	-- scheduler/common.sh@641 -- # load_median=100
00:21:33.420   10:59:21	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '99 99 100 100 100' 99 100
00:21:33.420  * cpu4 idle samples: 99 99 100 100 100 (avg: 99%, median: 100%)
00:21:33.420    10:59:21	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 4 user
00:21:33.420    10:59:21	-- scheduler/common.sh@678 -- # local cpu=4 time=user
00:21:33.420    10:59:21	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:33.420    10:59:21	-- scheduler/common.sh@682 -- # [[ -v raw_samples_4 ]]
00:21:33.420    10:59:21	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_4
00:21:33.420    10:59:21	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:33.420    10:59:21	-- scheduler/common.sh@690 -- # case "$time" in
00:21:33.420    10:59:21	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:33.420     10:59:21	-- scheduler/common.sh@691 -- # trap - ERR
00:21:33.420     10:59:21	-- scheduler/common.sh@691 -- # print_backtrace
00:21:33.420     10:59:21	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:21:33.420     10:59:21	-- common/autotest_common.sh@1142 -- # return 0
00:21:33.420     10:59:21	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:33.420    10:59:21	-- scheduler/common.sh@697 -- # usage=0
00:21:33.420    10:59:21	-- scheduler/common.sh@698 -- # usage=0
00:21:33.420    10:59:21	-- scheduler/common.sh@700 -- # printf %u 0
00:21:33.420    10:59:21	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 4 user 0
00:21:33.420  * cpu4 user usage: 0
00:21:33.420    10:59:21	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 4 '63353 63354 63354 63354 63354'
00:21:33.420  * cpu4 user samples: 63353 63354 63354 63354 63354
00:21:33.420    10:59:21	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0'
00:21:33.420  * cpu4 nice samples: 0 0 0 0 0
00:21:33.420    10:59:21	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 4 '10904 10904 10904 10904 10904'
00:21:33.420  * cpu4 system samples: 10904 10904 10904 10904 10904
00:21:33.420   10:59:21	-- scheduler/common.sh@652 -- # user_load=0
00:21:33.420   10:59:21	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:33.420   10:59:21	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 4
00:21:33.420  * cpu4 is idle
00:21:33.420   10:59:21	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:21:33.420   10:59:21	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:33.420   10:59:21	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:33.420    10:59:21	-- scheduler/common.sh@641 -- # calc_median 100 100 100 100 100
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # samples=('100' '100' '100' '100' '100')
00:21:33.420    10:59:21	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:33.420    10:59:21	-- scheduler/common.sh@728 -- # local middle median sample
00:21:33.421    10:59:21	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # printf '%s\n' 100 100 100 100 100
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # sort -n
00:21:33.421    10:59:21	-- scheduler/common.sh@732 -- # middle=2
00:21:33.421    10:59:21	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:33.421    10:59:21	-- scheduler/common.sh@736 -- # median=100
00:21:33.421    10:59:21	-- scheduler/common.sh@739 -- # echo 100
00:21:33.421   10:59:21	-- scheduler/common.sh@641 -- # load_median=100
00:21:33.421   10:59:21	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 37 '100 100 100 100 100' 100 100
00:21:33.421  * cpu37 idle samples: 100 100 100 100 100 (avg: 100%, median: 100%)
00:21:33.421    10:59:21	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 37 user
00:21:33.421    10:59:21	-- scheduler/common.sh@678 -- # local cpu=37 time=user
00:21:33.421    10:59:21	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:33.421    10:59:21	-- scheduler/common.sh@682 -- # [[ -v raw_samples_37 ]]
00:21:33.421    10:59:21	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_37
00:21:33.421    10:59:21	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@690 -- # case "$time" in
00:21:33.421    10:59:21	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:33.421     10:59:21	-- scheduler/common.sh@691 -- # trap - ERR
00:21:33.421     10:59:21	-- scheduler/common.sh@691 -- # print_backtrace
00:21:33.421     10:59:21	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:21:33.421     10:59:21	-- common/autotest_common.sh@1142 -- # return 0
00:21:33.421     10:59:21	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:33.421    10:59:21	-- scheduler/common.sh@697 -- # usage=0
00:21:33.421    10:59:21	-- scheduler/common.sh@698 -- # usage=0
00:21:33.421    10:59:21	-- scheduler/common.sh@700 -- # printf %u 0
00:21:33.421    10:59:21	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 37 user 0
00:21:33.421  * cpu37 user usage: 0
00:21:33.421    10:59:21	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 37 '25940 25940 25940 25940 25940'
00:21:33.421  * cpu37 user samples: 25940 25940 25940 25940 25940
00:21:33.421    10:59:21	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 37 '6 6 6 6 6'
00:21:33.421  * cpu37 nice samples: 6 6 6 6 6
00:21:33.421    10:59:21	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 37 '3732 3732 3732 3732 3732'
00:21:33.421  * cpu37 system samples: 3732 3732 3732 3732 3732
00:21:33.421   10:59:21	-- scheduler/common.sh@652 -- # user_load=0
00:21:33.421   10:59:21	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:33.421   10:59:21	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 37
00:21:33.421  * cpu37 is idle
00:21:33.421   10:59:21	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
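[annotation] Per cpu, the idle verdict traced at @652-655 (and, for the busy cores later in the run, @656-661) comes down to two checks. The sketch below mirrors the trace with the cpu37 values plugged in; the fall-through between the two tests is inferred, not quoted:

    # idle verdict, reconstructed from the @652-661 trace lines
    cpu=37
    samples=(100 100 100 100 100)   # idle percentage per second, newest last
    user_load=0                     # from cpu_usage_clk_tck, 0 for cpu37
    if ((samples[-1] >= 70)); then        # last second still mostly idle
        printf '* cpu%u is idle\n' "$cpu"
        is_idle[cpu]=1
    elif ((user_load <= 15)); then        # low idle sample but negligible user time
        is_idle[cpu]=1                    # assumed to count as idle as well
    else
        printf '* cpu%u is not idle\n' "$cpu"
        is_idle[cpu]=0
    fi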
00:21:33.421   10:59:21	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:33.421   10:59:21	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:33.421    10:59:21	-- scheduler/common.sh@641 -- # calc_median 99 100 100 98 100
00:21:33.421    10:59:21	-- scheduler/common.sh@727 -- # samples=('99' '100' '100' '98' '100')
00:21:33.421    10:59:21	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:33.421    10:59:21	-- scheduler/common.sh@728 -- # local middle median sample
00:21:33.421    10:59:21	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # printf '%s\n' 99 100 100 98 100
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # sort -n
00:21:33.421    10:59:21	-- scheduler/common.sh@732 -- # middle=2
00:21:33.421    10:59:21	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:33.421    10:59:21	-- scheduler/common.sh@736 -- # median=100
00:21:33.421    10:59:21	-- scheduler/common.sh@739 -- # echo 100
00:21:33.421   10:59:21	-- scheduler/common.sh@641 -- # load_median=100
00:21:33.421   10:59:21	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 38 '99 100 100 98 100' 99 100
00:21:33.421  * cpu38 idle samples: 99 100 100 98 100 (avg: 99%, median: 100%)
00:21:33.421    10:59:21	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 38 user
00:21:33.421    10:59:21	-- scheduler/common.sh@678 -- # local cpu=38 time=user
00:21:33.421    10:59:21	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:33.421    10:59:21	-- scheduler/common.sh@682 -- # [[ -v raw_samples_38 ]]
00:21:33.421    10:59:21	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_38
00:21:33.421    10:59:21	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@690 -- # case "$time" in
00:21:33.421    10:59:21	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:33.421     10:59:21	-- scheduler/common.sh@691 -- # trap - ERR
00:21:33.421     10:59:21	-- scheduler/common.sh@691 -- # print_backtrace
00:21:33.421     10:59:21	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:21:33.421     10:59:21	-- common/autotest_common.sh@1142 -- # return 0
00:21:33.421     10:59:21	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:33.421    10:59:21	-- scheduler/common.sh@697 -- # usage=0
00:21:33.421    10:59:21	-- scheduler/common.sh@698 -- # usage=0
00:21:33.421    10:59:21	-- scheduler/common.sh@700 -- # printf %u 0
00:21:33.421    10:59:21	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 38 user 0
00:21:33.421  * cpu38 user usage: 0
00:21:33.421    10:59:21	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 38 '33429 33429 33429 33430 33430'
00:21:33.421  * cpu38 user samples: 33429 33429 33429 33430 33430
00:21:33.421    10:59:21	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 38 '26 26 26 26 26'
00:21:33.421  * cpu38 nice samples: 26 26 26 26 26
00:21:33.421    10:59:21	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 38 '5283 5283 5283 5284 5284'
00:21:33.421  * cpu38 system samples: 5283 5283 5283 5284 5284
00:21:33.421   10:59:21	-- scheduler/common.sh@652 -- # user_load=0
00:21:33.421   10:59:21	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:33.421   10:59:21	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 38
00:21:33.421  * cpu38 is idle
00:21:33.421   10:59:21	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:21:33.421   10:59:21	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:33.421   10:59:21	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:33.421    10:59:21	-- scheduler/common.sh@641 -- # calc_median 100 99 100 100 99
00:21:33.421    10:59:21	-- scheduler/common.sh@727 -- # samples=('100' '99' '100' '100' '99')
00:21:33.421    10:59:21	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:33.421    10:59:21	-- scheduler/common.sh@728 -- # local middle median sample
00:21:33.421    10:59:21	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # printf '%s\n' 100 99 100 100 99
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # sort -n
00:21:33.421    10:59:21	-- scheduler/common.sh@732 -- # middle=2
00:21:33.421    10:59:21	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:33.421    10:59:21	-- scheduler/common.sh@736 -- # median=100
00:21:33.421    10:59:21	-- scheduler/common.sh@739 -- # echo 100
00:21:33.421   10:59:21	-- scheduler/common.sh@641 -- # load_median=100
00:21:33.421   10:59:21	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 39 '100 99 100 100 99' 99 100
00:21:33.421  * cpu39 idle samples: 100 99 100 100 99 (avg: 99%, median: 100%)
00:21:33.421    10:59:21	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 39 user
00:21:33.421    10:59:21	-- scheduler/common.sh@678 -- # local cpu=39 time=user
00:21:33.421    10:59:21	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:33.421    10:59:21	-- scheduler/common.sh@682 -- # [[ -v raw_samples_39 ]]
00:21:33.421    10:59:21	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_39
00:21:33.421    10:59:21	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:33.421    10:59:21	-- scheduler/common.sh@690 -- # case "$time" in
00:21:33.421    10:59:21	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:33.421     10:59:21	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:33.421    10:59:21	-- scheduler/common.sh@697 -- # usage=1
00:21:33.421    10:59:21	-- scheduler/common.sh@698 -- # usage=1
00:21:33.421    10:59:21	-- scheduler/common.sh@700 -- # printf %u 1
00:21:33.421    10:59:21	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 39 user 1
00:21:33.421  * cpu39 user usage: 1
00:21:33.421    10:59:21	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 39 '33044 33044 33044 33044 33045'
00:21:33.421  * cpu39 user samples: 33044 33044 33044 33044 33045
00:21:33.421    10:59:21	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 39 '0 0 0 0 0'
00:21:33.421  * cpu39 nice samples: 0 0 0 0 0
00:21:33.421    10:59:21	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 39 '5629 5630 5630 5630 5630'
00:21:33.421  * cpu39 system samples: 5629 5630 5630 5630 5630
00:21:33.421   10:59:21	-- scheduler/common.sh@652 -- # user_load=1
00:21:33.421   10:59:21	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:33.421   10:59:21	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 39
00:21:33.421  * cpu39 is idle
00:21:33.421   10:59:21	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
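[annotation] cpu39 makes the usage arithmetic visible: the user counter moved from 33044 to 33045 across the window, so clk_delta is 1 tick and the helper reports usage 1. With CLK_TCK at its usual 100 and one sample per second, ticks of user time per second read directly as a percentage; that is also why the busy cores later produce a raw value of 101 that @698 clamps to 100. A worked example with the cpu39 samples (CLK_TCK=100 assumed):

    user=(33044 33044 33044 33044 33045)  # raw user samples from the trace
    clk_delta=$((user[-1] - user[-2]))    # 1 tick of user time in the last second
    usage=$clk_delta                      # 1 tick/s == 1% when CLK_TCK=100
    ((usage > 100)) && usage=100          # the @698 clamp (cpu2 later: 101 -> 100)
    printf '* cpu%u %s usage: %u\n' 39 user "$usage"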
00:21:33.421   10:59:21	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:33.421   10:59:21	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:33.421    10:59:21	-- scheduler/common.sh@641 -- # calc_median 100 100 99 100 100
00:21:33.421    10:59:21	-- scheduler/common.sh@727 -- # samples=('100' '100' '99' '100' '100')
00:21:33.421    10:59:21	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:33.421    10:59:21	-- scheduler/common.sh@728 -- # local middle median sample
00:21:33.421    10:59:21	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # printf '%s\n' 100 100 99 100 100
00:21:33.421     10:59:21	-- scheduler/common.sh@730 -- # sort -n
00:21:33.421    10:59:21	-- scheduler/common.sh@732 -- # middle=2
00:21:33.421    10:59:21	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:33.421    10:59:21	-- scheduler/common.sh@736 -- # median=100
00:21:33.421    10:59:21	-- scheduler/common.sh@739 -- # echo 100
00:21:33.421   10:59:21	-- scheduler/common.sh@641 -- # load_median=100
00:21:33.421   10:59:21	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 40 '100 100 99 100 100' 99 100
00:21:33.421  * cpu40 idle samples: 100 100 99 100 100 (avg: 99%, median: 100%)
00:21:33.421    10:59:21	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 40 user
00:21:33.421    10:59:21	-- scheduler/common.sh@678 -- # local cpu=40 time=user
00:21:33.421    10:59:21	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:33.421    10:59:21	-- scheduler/common.sh@682 -- # [[ -v raw_samples_40 ]]
00:21:33.421    10:59:21	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_40
00:21:33.421    10:59:21	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:33.422    10:59:21	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:33.422    10:59:21	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:33.422    10:59:21	-- scheduler/common.sh@690 -- # case "$time" in
00:21:33.422    10:59:21	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:33.422     10:59:21	-- scheduler/common.sh@691 -- # trap - ERR
00:21:33.422     10:59:21	-- scheduler/common.sh@691 -- # print_backtrace
00:21:33.422     10:59:21	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:21:33.422     10:59:21	-- common/autotest_common.sh@1142 -- # return 0
00:21:33.422     10:59:21	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:33.422    10:59:21	-- scheduler/common.sh@697 -- # usage=0
00:21:33.422    10:59:21	-- scheduler/common.sh@698 -- # usage=0
00:21:33.422    10:59:21	-- scheduler/common.sh@700 -- # printf %u 0
00:21:33.422    10:59:21	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 40 user 0
00:21:33.422  * cpu40 user usage: 0
00:21:33.422    10:59:21	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 40 '31722 31722 31722 31722 31722'
00:21:33.422  * cpu40 user samples: 31722 31722 31722 31722 31722
00:21:33.422    10:59:21	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 40 '2720 2720 2720 2720 2720'
00:21:33.422  * cpu40 nice samples: 2720 2720 2720 2720 2720
00:21:33.422    10:59:21	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 40 '6045 6045 6046 6046 6046'
00:21:33.422  * cpu40 system samples: 6045 6045 6046 6046 6046
00:21:33.422   10:59:21	-- scheduler/common.sh@652 -- # user_load=0
00:21:33.422   10:59:21	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:33.422   10:59:21	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 40
00:21:33.422  * cpu40 is idle
00:21:33.422   10:59:21	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:21:33.422    10:59:21	-- scheduler/interrupt.sh@31 -- # rpc_cmd framework_get_reactors
00:21:33.422    10:59:21	-- scheduler/interrupt.sh@31 -- # jq -r '.reactors[]'
00:21:33.422    10:59:21	-- common/autotest_common.sh@561 -- # xtrace_disable
00:21:33.422    10:59:21	-- common/autotest_common.sh@10 -- # set +x
00:21:33.422    10:59:21	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@31 -- # reactor_framework='{
00:21:33.422    "lcore": 1,
00:21:33.422    "busy": 686778212,
00:21:33.422    "idle": 27823867176,
00:21:33.422    "in_interrupt": false,
00:21:33.422    "core_freq": 1200,
00:21:33.422    "lw_threads": [
00:21:33.422      {
00:21:33.422        "name": "app_thread",
00:21:33.422        "id": 1,
00:21:33.422        "cpumask": "2",
00:21:33.422        "elapsed": 28531020904
00:21:33.422      }
00:21:33.422    ]
00:21:33.422  }
00:21:33.422  {
00:21:33.422    "lcore": 2,
00:21:33.422    "busy": 0,
00:21:33.422    "idle": 4029935692,
00:21:33.422    "in_interrupt": true,
00:21:33.422    "core_freq": 2300,
00:21:33.422    "lw_threads": []
00:21:33.422  }
00:21:33.422  {
00:21:33.422    "lcore": 3,
00:21:33.422    "busy": 0,
00:21:33.422    "idle": 4030259948,
00:21:33.422    "in_interrupt": true,
00:21:33.422    "core_freq": 2300,
00:21:33.422    "lw_threads": []
00:21:33.422  }
00:21:33.422  {
00:21:33.422    "lcore": 4,
00:21:33.422    "busy": 0,
00:21:33.422    "idle": 4030686560,
00:21:33.422    "in_interrupt": true,
00:21:33.422    "core_freq": 2300,
00:21:33.422    "lw_threads": []
00:21:33.422  }
00:21:33.422  {
00:21:33.422    "lcore": 37,
00:21:33.422    "busy": 0,
00:21:33.422    "idle": 4031003094,
00:21:33.422    "in_interrupt": true,
00:21:33.422    "core_freq": 2300,
00:21:33.422    "lw_threads": []
00:21:33.422  }
00:21:33.422  {
00:21:33.422    "lcore": 38,
00:21:33.422    "busy": 0,
00:21:33.422    "idle": 4031261042,
00:21:33.422    "in_interrupt": true,
00:21:33.422    "core_freq": 2300,
00:21:33.422    "lw_threads": []
00:21:33.422  }
00:21:33.422  {
00:21:33.422    "lcore": 39,
00:21:33.422    "busy": 0,
00:21:33.422    "idle": 4031517102,
00:21:33.422    "in_interrupt": true,
00:21:33.422    "core_freq": 2300,
00:21:33.422    "lw_threads": []
00:21:33.422  }
00:21:33.422  {
00:21:33.422    "lcore": 40,
00:21:33.422    "busy": 0,
00:21:33.422    "idle": 4031948184,
00:21:33.422    "in_interrupt": true,
00:21:33.422    "core_freq": 2300,
00:21:33.422    "lw_threads": []
00:21:33.422  }'
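[annotation] Every placement assertion in this test comes from replaying $reactor_framework (the dump above) through jq. The @32/@33 loop that follows runs one filter per worker core, and at this point every filter must come back empty because no lightweight threads have been created yet:

    # the per-core emptiness check, as the @33 trace lines apply it
    cpus=(1 2 3 4 37 38 39 40)            # main core first, per the trace
    for cpu in "${cpus[@]:1}"; do
        placed=$(jq -r "select(.lcore == $cpu) | .lw_threads[].id" \
                    <<< "$reactor_framework")
        [[ -z $placed ]] || echo "unexpected thread(s) on lcore $cpu: $placed"
    done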
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:21:33.422    10:59:21	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 2) | .lw_threads[].id'
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:21:33.422    10:59:21	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id'
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:21:33.422    10:59:21	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id'
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:21:33.422    10:59:21	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 37) | .lw_threads[].id'
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:21:33.422   10:59:21	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:21:33.422    10:59:21	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 38) | .lw_threads[].id'
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:21:33.422    10:59:22	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 39) | .lw_threads[].id'
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@32 -- # for cpu in "${cpus[@]:1}"
00:21:33.422    10:59:22	-- scheduler/interrupt.sh@33 -- # jq -r 'select(.lcore == 40) | .lw_threads[].id'
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@33 -- # [[ -z '' ]]
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@41 -- # (( is_idle[cpu] == 0 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@39 -- # for cpu in "${!is_idle[@]}"
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@40 -- # (( cpu == spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@43 -- # (( cpu != spdk_main_core ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@44 -- # (( is_idle[cpu] == 1 ))
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@49 -- # busy_cpus=("${cpus[@]:1:3}")
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@49 -- # threads=()
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}"
00:21:33.422     10:59:22	-- scheduler/interrupt.sh@54 -- # mask_cpus 2
00:21:33.422      10:59:22	-- scheduler/common.sh@166 -- # fold_array_onto_string 2
00:21:33.422      10:59:22	-- scheduler/common.sh@27 -- # cpus=('2')
00:21:33.422      10:59:22	-- scheduler/common.sh@27 -- # local cpus
00:21:33.422      10:59:22	-- scheduler/common.sh@29 -- # local IFS=,
00:21:33.422      10:59:22	-- scheduler/common.sh@30 -- # echo 2
00:21:33.422     10:59:22	-- scheduler/common.sh@166 -- # printf '[%s]\n' 2
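[annotation] mask_cpus is a thin formatting wrapper: fold_array_onto_string joins its arguments with commas (the IFS=, at @29) and @166 wraps the result in brackets, producing the '[2]' cpumask string handed to create_thread just below. Reconstructed from the @27-@30/@166 trace:

    fold_array_onto_string() {
        local cpus=("$@")
        local IFS=,
        echo "${cpus[*]}"       # "2" for one cpu, "2,3,4" for several
    }
    mask_cpus() {
        printf '[%s]\n' "$(fold_array_onto_string "$@")"
    }
    mask_cpus 2                 # -> [2]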
00:21:33.422    10:59:22	-- scheduler/interrupt.sh@54 -- # create_thread -n thread2 -m '[2]' -a 100
00:21:33.422    10:59:22	-- scheduler/common.sh@471 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread2 -m '[2]' -a 100
00:21:33.422    10:59:22	-- common/autotest_common.sh@561 -- # xtrace_disable
00:21:33.422    10:59:22	-- common/autotest_common.sh@10 -- # set +x
00:21:33.422    10:59:22	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@54 -- # threads[cpu]=2
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu")
00:21:33.422   10:59:22	-- scheduler/interrupt.sh@55 -- # collect_cpu_idle
00:21:33.422   10:59:22	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:21:33.422   10:59:22	-- scheduler/common.sh@628 -- # local time=5
00:21:33.422   10:59:22	-- scheduler/common.sh@629 -- # local cpu
00:21:33.422   10:59:22	-- scheduler/common.sh@630 -- # local samples
00:21:33.422   10:59:22	-- scheduler/common.sh@631 -- # is_idle=()
00:21:33.422   10:59:22	-- scheduler/common.sh@631 -- # local -g is_idle
00:21:33.422   10:59:22	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5
00:21:33.422  Collecting cpu idle stats (cpus: 2) for 5 seconds...
00:21:33.422   10:59:22	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 2
00:21:33.422   10:59:22	-- scheduler/common.sh@483 -- # xtrace_disable
00:21:33.422   10:59:22	-- common/autotest_common.sh@10 -- # set +x
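[annotation] get_cpu_time 5 idle 0 1 2 runs with xtrace disabled, which is why the log jumps roughly 6.5 seconds to the next line. Judging from the raw_samples_N arrays and the user/nice/system/idle columns consumed afterwards, it samples the per-cpu rows of /proc/stat once per second; that mechanism is an assumption (the real helper lives in scheduler/common.sh), and the sketch name get_cpu_time_sketch is hypothetical:

    # assumed sampler: field order matches /proc/stat's cpuN rows
    get_cpu_time_sketch() {
        local time=$1 cpu=$2 i tag user nice system idle rest
        local -n raw=raw_samples_$cpu
        for ((i = 0; i < time; i++)); do
            while read -r tag user nice system idle rest; do
                [[ $tag == "cpu$cpu" ]] && raw+=("$user $nice $system $idle")
            done < /proc/stat
            sleep 1
        done
    }
    raw_samples_2=()
    get_cpu_time_sketch 5 2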
00:21:39.997   10:59:28	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:21:39.997   10:59:28	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:39.997   10:59:28	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:39.997    10:59:28	-- scheduler/common.sh@641 -- # calc_median 100 18 0 0 0
00:21:39.997    10:59:28	-- scheduler/common.sh@727 -- # samples=('100' '18' '0' '0' '0')
00:21:39.997    10:59:28	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:39.997    10:59:28	-- scheduler/common.sh@728 -- # local middle median sample
00:21:39.997    10:59:28	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:39.997     10:59:28	-- scheduler/common.sh@730 -- # printf '%s\n' 100 18 0 0 0
00:21:39.997     10:59:28	-- scheduler/common.sh@730 -- # sort -n
00:21:39.997    10:59:28	-- scheduler/common.sh@732 -- # middle=2
00:21:39.997    10:59:28	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:39.997    10:59:28	-- scheduler/common.sh@736 -- # median=0
00:21:39.997    10:59:28	-- scheduler/common.sh@739 -- # echo 0
00:21:39.997   10:59:28	-- scheduler/common.sh@641 -- # load_median=0
00:21:39.997   10:59:28	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '100 18 0 0 0' 23 0
00:21:39.997  * cpu2 idle samples: 100 18 0 0 0 (avg: 23%, median: 0%)
00:21:39.997    10:59:28	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 2 user
00:21:39.997    10:59:28	-- scheduler/common.sh@678 -- # local cpu=2 time=user
00:21:39.997    10:59:28	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:39.997    10:59:28	-- scheduler/common.sh@682 -- # [[ -v raw_samples_2 ]]
00:21:39.997    10:59:28	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_2
00:21:39.997    10:59:28	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:39.998    10:59:28	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:39.998    10:59:28	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:39.998    10:59:28	-- scheduler/common.sh@690 -- # case "$time" in
00:21:39.998    10:59:28	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:39.998     10:59:28	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:39.998    10:59:28	-- scheduler/common.sh@697 -- # usage=101
00:21:39.998    10:59:28	-- scheduler/common.sh@698 -- # usage=100
00:21:39.998    10:59:28	-- scheduler/common.sh@700 -- # printf %u 100
00:21:39.998    10:59:28	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 2 user 100
00:21:39.998  * cpu2 user usage: 100
00:21:39.998    10:59:28	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 2 '158779 158861 158962 159062 159163'
00:21:39.998  * cpu2 user samples: 158779 158861 158962 159062 159163
00:21:39.998    10:59:28	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0'
00:21:39.998  * cpu2 nice samples: 0 0 0 0 0
00:21:39.998    10:59:28	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 2 '11970 11970 11970 11970 11970'
00:21:39.998  * cpu2 system samples: 11970 11970 11970 11970 11970
00:21:39.998   10:59:28	-- scheduler/common.sh@652 -- # user_load=100
00:21:39.998   10:59:28	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:39.998   10:59:28	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:21:39.998   10:59:28	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 2
00:21:39.998  * cpu2 is not idle
00:21:39.998   10:59:28	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
00:21:39.998    10:59:28	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 2
00:21:39.998    10:59:28	-- scheduler/common.sh@747 -- # xtrace_disable
00:21:39.998    10:59:28	-- common/autotest_common.sh@10 -- # set +x
00:21:43.289  stime samples: 0 0 0 0
00:21:43.289  utime samples: 0 100 100 100
00:21:43.289   10:59:32	-- scheduler/common.sh@666 -- # user_spdk_load=100
00:21:43.289   10:59:32	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
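[annotation] The cross-check at @666-667 guards against counting some other process's load: cpu2 looked 100% busy, and the stime/utime samples just above confirm the SPDK process itself accounted for it (user_spdk_load=100 > 15). A plausible reading of get_spdk_proc_time, assuming it samples utime/stime (the 14th and 15th fields of /proc/<pid>/stat, in clock ticks) for the pinned reactor; $spdk_pid is a hypothetical variable standing in for however the test resolves the process:

    # naive parse; assumes the comm field contains no spaces, and that
    # $spdk_pid points at the SPDK process (illustrative only)
    read -r -a st < "/proc/$spdk_pid/stat"
    utime=${st[13]}                         # field 14, user-mode ticks
    sleep 1
    read -r -a st < "/proc/$spdk_pid/stat"
    echo "utime delta: $((st[13] - utime)) ticks/s"   # ~100 when fully busy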
00:21:43.289    10:59:32	-- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors
00:21:43.289    10:59:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:21:43.289    10:59:32	-- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]'
00:21:43.289    10:59:32	-- common/autotest_common.sh@10 -- # set +x
00:21:43.289    10:59:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:43.289   10:59:32	-- scheduler/interrupt.sh@56 -- # reactor_framework='{
00:21:43.289    "lcore": 1,
00:21:43.289    "busy": 3932031456,
00:21:43.289    "idle": 48393122476,
00:21:43.289    "in_interrupt": false,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": [
00:21:43.289      {
00:21:43.289        "name": "app_thread",
00:21:43.289        "id": 1,
00:21:43.289        "cpumask": "2",
00:21:43.289        "elapsed": 52345537810
00:21:43.289      }
00:21:43.289    ]
00:21:43.289  }
00:21:43.289  {
00:21:43.289    "lcore": 2,
00:21:43.289    "busy": 19783489022,
00:21:43.289    "idle": 4720093956,
00:21:43.289    "in_interrupt": false,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": [
00:21:43.289      {
00:21:43.289        "name": "thread2",
00:21:43.289        "id": 2,
00:21:43.289        "cpumask": "4",
00:21:43.289        "elapsed": 19726854850
00:21:43.289      }
00:21:43.289    ]
00:21:43.289  }
00:21:43.289  {
00:21:43.289    "lcore": 3,
00:21:43.289    "busy": 0,
00:21:43.289    "idle": 4030259948,
00:21:43.289    "in_interrupt": true,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": []
00:21:43.289  }
00:21:43.289  {
00:21:43.289    "lcore": 4,
00:21:43.289    "busy": 0,
00:21:43.289    "idle": 4030686560,
00:21:43.289    "in_interrupt": true,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": []
00:21:43.289  }
00:21:43.289  {
00:21:43.289    "lcore": 37,
00:21:43.289    "busy": 0,
00:21:43.289    "idle": 4031003094,
00:21:43.289    "in_interrupt": true,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": []
00:21:43.289  }
00:21:43.289  {
00:21:43.289    "lcore": 38,
00:21:43.289    "busy": 0,
00:21:43.289    "idle": 4031261042,
00:21:43.289    "in_interrupt": true,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": []
00:21:43.289  }
00:21:43.289  {
00:21:43.289    "lcore": 39,
00:21:43.289    "busy": 0,
00:21:43.289    "idle": 4031517102,
00:21:43.289    "in_interrupt": true,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": []
00:21:43.289  }
00:21:43.289  {
00:21:43.289    "lcore": 40,
00:21:43.289    "busy": 0,
00:21:43.289    "idle": 4031948184,
00:21:43.289    "in_interrupt": true,
00:21:43.289    "core_freq": 2300,
00:21:43.289    "lw_threads": []
00:21:43.289  }'
00:21:43.289    10:59:32	-- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 2) | .lw_threads[] | select(.name == "thread2")'
00:21:43.289   10:59:32	-- scheduler/interrupt.sh@57 -- # [[ -n {
00:21:43.289    "name": "thread2",
00:21:43.289    "id": 2,
00:21:43.289    "cpumask": "4",
00:21:43.289    "elapsed": 19726854850
00:21:43.289  } ]]
00:21:43.289   10:59:32	-- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 ))
00:21:43.289   10:59:32	-- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}"
00:21:43.289     10:59:32	-- scheduler/interrupt.sh@54 -- # mask_cpus 3
00:21:43.289      10:59:32	-- scheduler/common.sh@166 -- # fold_array_onto_string 3
00:21:43.289      10:59:32	-- scheduler/common.sh@27 -- # cpus=('3')
00:21:43.289      10:59:32	-- scheduler/common.sh@27 -- # local cpus
00:21:43.289      10:59:32	-- scheduler/common.sh@29 -- # local IFS=,
00:21:43.289      10:59:32	-- scheduler/common.sh@30 -- # echo 3
00:21:43.289     10:59:32	-- scheduler/common.sh@166 -- # printf '[%s]\n' 3
00:21:43.289    10:59:32	-- scheduler/interrupt.sh@54 -- # create_thread -n thread3 -m '[3]' -a 100
00:21:43.289    10:59:32	-- scheduler/common.sh@471 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread3 -m '[3]' -a 100
00:21:43.289    10:59:32	-- common/autotest_common.sh@561 -- # xtrace_disable
00:21:43.289    10:59:32	-- common/autotest_common.sh@10 -- # set +x
00:21:43.547    10:59:32	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:43.547   10:59:32	-- scheduler/interrupt.sh@54 -- # threads[cpu]=3
00:21:43.547   10:59:32	-- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu")
00:21:43.547   10:59:32	-- scheduler/interrupt.sh@55 -- # collect_cpu_idle
00:21:43.547   10:59:32	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:21:43.547   10:59:32	-- scheduler/common.sh@628 -- # local time=5
00:21:43.547   10:59:32	-- scheduler/common.sh@629 -- # local cpu
00:21:43.547   10:59:32	-- scheduler/common.sh@630 -- # local samples
00:21:43.547   10:59:32	-- scheduler/common.sh@631 -- # is_idle=()
00:21:43.547   10:59:32	-- scheduler/common.sh@631 -- # local -g is_idle
00:21:43.547   10:59:32	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5
00:21:43.547  Collecting cpu idle stats (cpus: 3) for 5 seconds...
00:21:43.547   10:59:32	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 3
00:21:43.547   10:59:32	-- scheduler/common.sh@483 -- # xtrace_disable
00:21:43.547   10:59:32	-- common/autotest_common.sh@10 -- # set +x
00:21:50.115   10:59:38	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:21:50.115   10:59:38	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:21:50.115   10:59:38	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:21:50.115    10:59:38	-- scheduler/common.sh@641 -- # calc_median 77 0 0 0 0
00:21:50.115    10:59:38	-- scheduler/common.sh@727 -- # samples=('77' '0' '0' '0' '0')
00:21:50.115    10:59:38	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:21:50.115    10:59:38	-- scheduler/common.sh@728 -- # local middle median sample
00:21:50.115    10:59:38	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:21:50.115     10:59:38	-- scheduler/common.sh@730 -- # printf '%s\n' 77 0 0 0 0
00:21:50.115     10:59:38	-- scheduler/common.sh@730 -- # sort -n
00:21:50.115    10:59:38	-- scheduler/common.sh@732 -- # middle=2
00:21:50.115    10:59:38	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:21:50.115    10:59:38	-- scheduler/common.sh@736 -- # median=0
00:21:50.115    10:59:38	-- scheduler/common.sh@739 -- # echo 0
00:21:50.115   10:59:38	-- scheduler/common.sh@641 -- # load_median=0
00:21:50.115   10:59:38	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '77 0 0 0 0' 15 0
00:21:50.115  * cpu3 idle samples: 77 0 0 0 0 (avg: 15%, median: 0%)
00:21:50.115    10:59:38	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 3 user
00:21:50.115    10:59:38	-- scheduler/common.sh@678 -- # local cpu=3 time=user
00:21:50.115    10:59:38	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:21:50.115    10:59:38	-- scheduler/common.sh@682 -- # [[ -v raw_samples_3 ]]
00:21:50.115    10:59:38	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_3
00:21:50.115    10:59:38	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:21:50.115    10:59:38	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:21:50.115    10:59:38	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:21:50.115    10:59:38	-- scheduler/common.sh@690 -- # case "$time" in
00:21:50.115    10:59:38	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:21:50.115     10:59:38	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:21:50.115    10:59:38	-- scheduler/common.sh@697 -- # usage=100
00:21:50.115    10:59:38	-- scheduler/common.sh@698 -- # usage=100
00:21:50.115    10:59:38	-- scheduler/common.sh@700 -- # printf %u 100
00:21:50.115    10:59:38	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 3 user 100
00:21:50.115  * cpu3 user usage: 100
00:21:50.115    10:59:38	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 3 '132195 132296 132396 132497 132597'
00:21:50.115  * cpu3 user samples: 132195 132296 132396 132497 132597
00:21:50.115    10:59:38	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 3 '0 0 0 0 0'
00:21:50.115  * cpu3 nice samples: 0 0 0 0 0
00:21:50.115    10:59:38	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 3 '10410 10410 10410 10410 10410'
00:21:50.115  * cpu3 system samples: 10410 10410 10410 10410 10410
00:21:50.115   10:59:38	-- scheduler/common.sh@652 -- # user_load=100
00:21:50.115   10:59:38	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:21:50.115   10:59:38	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:21:50.115   10:59:38	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 3
00:21:50.115  * cpu3 is not idle
00:21:50.115   10:59:38	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
00:21:50.115    10:59:38	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 3
00:21:50.115    10:59:38	-- scheduler/common.sh@747 -- # xtrace_disable
00:21:50.115    10:59:38	-- common/autotest_common.sh@10 -- # set +x
00:21:54.304  stime samples: 0 0 0 0
00:21:54.304  utime samples: 0 100 100 100
00:21:54.304   10:59:42	-- scheduler/common.sh@666 -- # user_spdk_load=100
00:21:54.304   10:59:42	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
00:21:54.304    10:59:42	-- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors
00:21:54.304    10:59:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:21:54.304    10:59:42	-- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]'
00:21:54.304    10:59:42	-- common/autotest_common.sh@10 -- # set +x
00:21:54.304    10:59:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:54.304   10:59:42	-- scheduler/interrupt.sh@56 -- # reactor_framework='{
00:21:54.304    "lcore": 1,
00:21:54.304    "busy": 3959397574,
00:21:54.304    "idle": 72085928010,
00:21:54.304    "in_interrupt": false,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": [
00:21:54.304      {
00:21:54.304        "name": "app_thread",
00:21:54.304        "id": 1,
00:21:54.304        "cpumask": "2",
00:21:54.304        "elapsed": 76065712172
00:21:54.304      }
00:21:54.304    ]
00:21:54.304  }
00:21:54.304  {
00:21:54.304    "lcore": 2,
00:21:54.304    "busy": 43478047612,
00:21:54.304    "idle": 4720093956,
00:21:54.304    "in_interrupt": false,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": [
00:21:54.304      {
00:21:54.304        "name": "thread2",
00:21:54.304        "id": 2,
00:21:54.304        "cpumask": "4",
00:21:54.304        "elapsed": 43447029212
00:21:54.304      }
00:21:54.304    ]
00:21:54.304  }
00:21:54.304  {
00:21:54.304    "lcore": 3,
00:21:54.304    "busy": 20703719460,
00:21:54.304    "idle": 4949812896,
00:21:54.304    "in_interrupt": false,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": [
00:21:54.304      {
00:21:54.304        "name": "thread3",
00:21:54.304        "id": 3,
00:21:54.304        "cpumask": "8",
00:21:54.304        "elapsed": 20442611250
00:21:54.304      }
00:21:54.304    ]
00:21:54.304  }
00:21:54.304  {
00:21:54.304    "lcore": 4,
00:21:54.304    "busy": 0,
00:21:54.304    "idle": 4030686560,
00:21:54.304    "in_interrupt": true,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": []
00:21:54.304  }
00:21:54.304  {
00:21:54.304    "lcore": 37,
00:21:54.304    "busy": 0,
00:21:54.304    "idle": 4031003094,
00:21:54.304    "in_interrupt": true,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": []
00:21:54.304  }
00:21:54.304  {
00:21:54.304    "lcore": 38,
00:21:54.304    "busy": 0,
00:21:54.304    "idle": 4031261042,
00:21:54.304    "in_interrupt": true,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": []
00:21:54.304  }
00:21:54.304  {
00:21:54.304    "lcore": 39,
00:21:54.304    "busy": 0,
00:21:54.304    "idle": 4031517102,
00:21:54.304    "in_interrupt": true,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": []
00:21:54.304  }
00:21:54.304  {
00:21:54.304    "lcore": 40,
00:21:54.304    "busy": 0,
00:21:54.304    "idle": 4031948184,
00:21:54.304    "in_interrupt": true,
00:21:54.304    "core_freq": 2300,
00:21:54.304    "lw_threads": []
00:21:54.304  }'
00:21:54.304    10:59:42	-- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 3) | .lw_threads[] | select(.name == "thread3")'
00:21:54.304   10:59:42	-- scheduler/interrupt.sh@57 -- # [[ -n {
00:21:54.304    "name": "thread3",
00:21:54.304    "id": 3,
00:21:54.304    "cpumask": "8",
00:21:54.304    "elapsed": 20442611250
00:21:54.304  } ]]
00:21:54.304   10:59:42	-- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 ))
00:21:54.304   10:59:42	-- scheduler/interrupt.sh@53 -- # for cpu in "${busy_cpus[@]}"
00:21:54.304     10:59:42	-- scheduler/interrupt.sh@54 -- # mask_cpus 4
00:21:54.304      10:59:42	-- scheduler/common.sh@166 -- # fold_array_onto_string 4
00:21:54.304      10:59:42	-- scheduler/common.sh@27 -- # cpus=('4')
00:21:54.304      10:59:42	-- scheduler/common.sh@27 -- # local cpus
00:21:54.304      10:59:42	-- scheduler/common.sh@29 -- # local IFS=,
00:21:54.304      10:59:42	-- scheduler/common.sh@30 -- # echo 4
00:21:54.304     10:59:42	-- scheduler/common.sh@166 -- # printf '[%s]\n' 4
00:21:54.304    10:59:42	-- scheduler/interrupt.sh@54 -- # create_thread -n thread4 -m '[4]' -a 100
00:21:54.304    10:59:42	-- scheduler/common.sh@471 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n thread4 -m '[4]' -a 100
00:21:54.304    10:59:42	-- common/autotest_common.sh@561 -- # xtrace_disable
00:21:54.304    10:59:42	-- common/autotest_common.sh@10 -- # set +x
00:21:54.304    10:59:42	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:54.304   10:59:42	-- scheduler/interrupt.sh@54 -- # threads[cpu]=4
00:21:54.304   10:59:42	-- scheduler/interrupt.sh@54 -- # cpus_to_collect=("$cpu")
00:21:54.304   10:59:42	-- scheduler/interrupt.sh@55 -- # collect_cpu_idle
00:21:54.304   10:59:42	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:21:54.304   10:59:42	-- scheduler/common.sh@628 -- # local time=5
00:21:54.304   10:59:42	-- scheduler/common.sh@629 -- # local cpu
00:21:54.304   10:59:42	-- scheduler/common.sh@630 -- # local samples
00:21:54.304   10:59:42	-- scheduler/common.sh@631 -- # is_idle=()
00:21:54.304   10:59:42	-- scheduler/common.sh@631 -- # local -g is_idle
00:21:54.304   10:59:42	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5
00:21:54.304  Collecting cpu idle stats (cpus: 4) for 5 seconds...
00:21:54.304   10:59:42	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 4
00:21:54.304   10:59:42	-- scheduler/common.sh@483 -- # xtrace_disable
00:21:54.304   10:59:42	-- common/autotest_common.sh@10 -- # set +x
00:22:00.868   10:59:48	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:00.868   10:59:48	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:00.868   10:59:48	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:00.868    10:59:48	-- scheduler/common.sh@641 -- # calc_median 38 0 0 0 0
00:22:00.868    10:59:48	-- scheduler/common.sh@727 -- # samples=('38' '0' '0' '0' '0')
00:22:00.868    10:59:48	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:00.868    10:59:48	-- scheduler/common.sh@728 -- # local middle median sample
00:22:00.868    10:59:48	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:00.868     10:59:48	-- scheduler/common.sh@730 -- # printf '%s\n' 38 0 0 0 0
00:22:00.868     10:59:48	-- scheduler/common.sh@730 -- # sort -n
00:22:00.868    10:59:48	-- scheduler/common.sh@732 -- # middle=2
00:22:00.868    10:59:48	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:00.868    10:59:48	-- scheduler/common.sh@736 -- # median=0
00:22:00.868    10:59:48	-- scheduler/common.sh@739 -- # echo 0
00:22:00.868   10:59:48	-- scheduler/common.sh@641 -- # load_median=0
00:22:00.868   10:59:48	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '38 0 0 0 0' 7 0
00:22:00.868  * cpu4 idle samples: 38 0 0 0 0 (avg: 7%, median: 0%)
00:22:00.868    10:59:48	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 4 user
00:22:00.868    10:59:48	-- scheduler/common.sh@678 -- # local cpu=4 time=user
00:22:00.868    10:59:48	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:00.868    10:59:48	-- scheduler/common.sh@682 -- # [[ -v raw_samples_4 ]]
00:22:00.868    10:59:48	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_4
00:22:00.868    10:59:48	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:00.868    10:59:48	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:00.868    10:59:48	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:00.868    10:59:48	-- scheduler/common.sh@690 -- # case "$time" in
00:22:00.868    10:59:48	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:00.868     10:59:48	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:00.868    10:59:48	-- scheduler/common.sh@697 -- # usage=101
00:22:00.868    10:59:48	-- scheduler/common.sh@698 -- # usage=100
00:22:00.868    10:59:48	-- scheduler/common.sh@700 -- # printf %u 100
00:22:00.868    10:59:48	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 4 user 100
00:22:00.868  * cpu4 user usage: 100
00:22:00.868    10:59:48	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 4 '63433 63534 63634 63735 63836'
00:22:00.868  * cpu4 user samples: 63433 63534 63634 63735 63836
00:22:00.868    10:59:48	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0'
00:22:00.868  * cpu4 nice samples: 0 0 0 0 0
00:22:00.868    10:59:48	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 4 '10917 10917 10917 10917 10917'
00:22:00.868  * cpu4 system samples: 10917 10917 10917 10917 10917
00:22:00.868   10:59:48	-- scheduler/common.sh@652 -- # user_load=100
00:22:00.868   10:59:48	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:00.868   10:59:48	-- scheduler/common.sh@656 -- # (( user_load <= 15 ))
00:22:00.868   10:59:48	-- scheduler/common.sh@660 -- # printf '* cpu%u is not idle\n' 4
00:22:00.868  * cpu4 is not idle
00:22:00.868   10:59:48	-- scheduler/common.sh@661 -- # is_idle[cpu]=0
00:22:00.868    10:59:48	-- scheduler/common.sh@666 -- # get_spdk_proc_time 5 4
00:22:00.868    10:59:48	-- scheduler/common.sh@747 -- # xtrace_disable
00:22:00.868    10:59:48	-- common/autotest_common.sh@10 -- # set +x
00:22:04.256  stime samples: 0 0 0 0
00:22:04.256  utime samples: 0 100 100 99
00:22:04.256   10:59:52	-- scheduler/common.sh@666 -- # user_spdk_load=99
00:22:04.256   10:59:52	-- scheduler/common.sh@667 -- # (( user_spdk_load <= 15 ))
00:22:04.256    10:59:52	-- scheduler/interrupt.sh@56 -- # rpc_cmd framework_get_reactors
00:22:04.256    10:59:52	-- scheduler/interrupt.sh@56 -- # jq -r '.reactors[]'
00:22:04.256    10:59:52	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.256    10:59:52	-- common/autotest_common.sh@10 -- # set +x
00:22:04.256    10:59:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:04.256   10:59:53	-- scheduler/interrupt.sh@56 -- # reactor_framework='{
00:22:04.256    "lcore": 1,
00:22:04.256    "busy": 3987667304,
00:22:04.256    "idle": 95998649854,
00:22:04.256    "in_interrupt": false,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": [
00:22:04.256      {
00:22:04.256        "name": "app_thread",
00:22:04.256        "id": 1,
00:22:04.256        "cpumask": "2",
00:22:04.256        "elapsed": 100006705240
00:22:04.256      }
00:22:04.256    ]
00:22:04.256  }
00:22:04.256  {
00:22:04.256    "lcore": 2,
00:22:04.256    "busy": 67402502490,
00:22:04.256    "idle": 4720093956,
00:22:04.256    "in_interrupt": false,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": [
00:22:04.256      {
00:22:04.256        "name": "thread2",
00:22:04.256        "id": 2,
00:22:04.256        "cpumask": "4",
00:22:04.256        "elapsed": 67388022280
00:22:04.256      }
00:22:04.256    ]
00:22:04.256  }
00:22:04.256  {
00:22:04.256    "lcore": 3,
00:22:04.256    "busy": 44398350420,
00:22:04.256    "idle": 4949812896,
00:22:04.256    "in_interrupt": false,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": [
00:22:04.256      {
00:22:04.256        "name": "thread3",
00:22:04.256        "id": 3,
00:22:04.256        "cpumask": "8",
00:22:04.256        "elapsed": 44383604318
00:22:04.256      }
00:22:04.256    ]
00:22:04.256  }
00:22:04.256  {
00:22:04.256    "lcore": 4,
00:22:04.256    "busy": 21623952288,
00:22:04.256    "idle": 4950460934,
00:22:04.256    "in_interrupt": false,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": [
00:22:04.256      {
00:22:04.256        "name": "thread4",
00:22:04.256        "id": 4,
00:22:04.256        "cpumask": "10",
00:22:04.256        "elapsed": 21379105332
00:22:04.256      }
00:22:04.256    ]
00:22:04.256  }
00:22:04.256  {
00:22:04.256    "lcore": 37,
00:22:04.256    "busy": 0,
00:22:04.256    "idle": 4031003094,
00:22:04.256    "in_interrupt": true,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": []
00:22:04.256  }
00:22:04.256  {
00:22:04.256    "lcore": 38,
00:22:04.256    "busy": 0,
00:22:04.256    "idle": 4031261042,
00:22:04.256    "in_interrupt": true,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": []
00:22:04.256  }
00:22:04.256  {
00:22:04.256    "lcore": 39,
00:22:04.256    "busy": 0,
00:22:04.256    "idle": 4031517102,
00:22:04.256    "in_interrupt": true,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": []
00:22:04.256  }
00:22:04.256  {
00:22:04.256    "lcore": 40,
00:22:04.256    "busy": 0,
00:22:04.256    "idle": 4031948184,
00:22:04.256    "in_interrupt": true,
00:22:04.256    "core_freq": 2300,
00:22:04.256    "lw_threads": []
00:22:04.256  }'
00:22:04.256    10:59:53	-- scheduler/interrupt.sh@57 -- # jq -r 'select(.lcore == 4) | .lw_threads[] | select(.name == "thread4")'
00:22:04.257   10:59:53	-- scheduler/interrupt.sh@57 -- # [[ -n {
00:22:04.257    "name": "thread4",
00:22:04.257    "id": 4,
00:22:04.257    "cpumask": "10",
00:22:04.257    "elapsed": 21379105332
00:22:04.257  } ]]
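[annotation] The cpumask strings in these dumps are hex bit masks, not cpu ids: thread2 carries "4" (1 << 2), thread3 "8" (1 << 3), and thread4's "10" above is 0x10, i.e. 1 << 4 — so each thread really is pinned to the core it was created for, and app_thread's "2" is 1 << 1 for the main core:

    # cpumask for cpu N is the hex rendering of 1 << N
    for n in 1 2 3 4; do
        printf 'cpu%u -> cpumask %x\n' "$n" $((1 << n))
    done
    # cpu1 -> cpumask 2, cpu2 -> 4, cpu3 -> 8, cpu4 -> 10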
00:22:04.257   10:59:53	-- scheduler/interrupt.sh@58 -- # (( is_idle[cpu] == 0 ))
00:22:04.257   10:59:53	-- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}"
00:22:04.257   10:59:53	-- scheduler/interrupt.sh@64 -- # active_thread 2 0
00:22:04.257   10:59:53	-- scheduler/common.sh@479 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 2 0
00:22:04.257   10:59:53	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:04.257   10:59:53	-- common/autotest_common.sh@10 -- # set +x
00:22:04.257   10:59:53	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
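[annotation] With all three workers verified busy, the test now flips them idle one at a time. active_thread is a one-line RPC wrapper, as the @479 trace shows; setting thread 2's activity to 0 stops its busy-loop, and the collection that follows must see cpu2 return to idle — the point of the interrupt-mode test being that an idle reactor stops polling instead of burning 100% user time:

    # active_thread as traced at scheduler/common.sh @479
    active_thread() {
        rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$@"
    }
    active_thread 2 0   # thread id 2, activity 0% -> stop busy-looping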
00:22:04.257   10:59:53	-- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu")
00:22:04.257   10:59:53	-- scheduler/interrupt.sh@66 -- # collect_cpu_idle
00:22:04.257   10:59:53	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:22:04.257   10:59:53	-- scheduler/common.sh@628 -- # local time=5
00:22:04.257   10:59:53	-- scheduler/common.sh@629 -- # local cpu
00:22:04.257   10:59:53	-- scheduler/common.sh@630 -- # local samples
00:22:04.257   10:59:53	-- scheduler/common.sh@631 -- # is_idle=()
00:22:04.257   10:59:53	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:04.257   10:59:53	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 2 5
00:22:04.257  Collecting cpu idle stats (cpus: 2) for 5 seconds...
00:22:04.257   10:59:53	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 2
00:22:04.257   10:59:53	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:04.257   10:59:53	-- common/autotest_common.sh@10 -- # set +x
00:22:10.822   10:59:59	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:10.822   10:59:59	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:10.822   10:59:59	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:10.822    10:59:59	-- scheduler/common.sh@641 -- # calc_median 0 0 81 100 100
00:22:10.822    10:59:59	-- scheduler/common.sh@727 -- # samples=('0' '0' '81' '100' '100')
00:22:10.822    10:59:59	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:10.822    10:59:59	-- scheduler/common.sh@728 -- # local middle median sample
00:22:10.822    10:59:59	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:10.822     10:59:59	-- scheduler/common.sh@730 -- # printf '%s\n' 0 0 81 100 100
00:22:10.822     10:59:59	-- scheduler/common.sh@730 -- # sort -n
00:22:10.822    10:59:59	-- scheduler/common.sh@732 -- # middle=2
00:22:10.822    10:59:59	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:10.822    10:59:59	-- scheduler/common.sh@736 -- # median=81
00:22:10.822    10:59:59	-- scheduler/common.sh@739 -- # echo 81
00:22:10.822   10:59:59	-- scheduler/common.sh@641 -- # load_median=81
00:22:10.822   10:59:59	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 2 '0 0 81 100 100' 56 81
00:22:10.822  * cpu2 idle samples: 0 0 81 100 100 (avg: 56%, median: 81%)
00:22:10.822    10:59:59	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 2 user
00:22:10.822    10:59:59	-- scheduler/common.sh@678 -- # local cpu=2 time=user
00:22:10.822    10:59:59	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:10.822    10:59:59	-- scheduler/common.sh@682 -- # [[ -v raw_samples_2 ]]
00:22:10.822    10:59:59	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_2
00:22:10.822    10:59:59	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:10.822    10:59:59	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:10.822    10:59:59	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:10.822    10:59:59	-- scheduler/common.sh@690 -- # case "$time" in
00:22:10.822    10:59:59	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:10.822     10:59:59	-- scheduler/common.sh@691 -- # trap - ERR
00:22:10.822     10:59:59	-- scheduler/common.sh@691 -- # print_backtrace
00:22:10.822     10:59:59	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:22:10.822     10:59:59	-- common/autotest_common.sh@1142 -- # return 0
00:22:10.822     10:59:59	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:10.822    10:59:59	-- scheduler/common.sh@697 -- # usage=0
00:22:10.822    10:59:59	-- scheduler/common.sh@698 -- # usage=0
00:22:10.822    10:59:59	-- scheduler/common.sh@700 -- # printf %u 0
00:22:10.822    10:59:59	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 2 user 0
00:22:10.822  * cpu2 user usage: 0
00:22:10.822    10:59:59	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 2 '161860 161961 161979 161979 161979'
00:22:10.822  * cpu2 user samples: 161860 161961 161979 161979 161979
00:22:10.822    10:59:59	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 2 '0 0 0 0 0'
00:22:10.822  * cpu2 nice samples: 0 0 0 0 0
00:22:10.822    10:59:59	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 2 '11970 11970 11971 11971 11971'
00:22:10.822  * cpu2 system samples: 11970 11970 11971 11971 11971
00:22:10.822   10:59:59	-- scheduler/common.sh@652 -- # user_load=0
00:22:10.822   10:59:59	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:10.822   10:59:59	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 2
00:22:10.822  * cpu2 is idle
00:22:10.822   10:59:59	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:10.822    10:59:59	-- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors
00:22:10.822    10:59:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.822    10:59:59	-- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]'
00:22:10.822    10:59:59	-- common/autotest_common.sh@10 -- # set +x
00:22:10.822    10:59:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.822   10:59:59	-- scheduler/interrupt.sh@67 -- # reactor_framework='{
00:22:10.822    "lcore": 1,
00:22:10.822    "busy": 4004706060,
00:22:10.822    "idle": 110406760788,
00:22:10.822    "in_interrupt": false,
00:22:10.822    "core_freq": 2300,
00:22:10.822    "lw_threads": [
00:22:10.822      {
00:22:10.822        "name": "app_thread",
00:22:10.822        "id": 1,
00:22:10.822        "cpumask": "2",
00:22:10.822        "elapsed": 114431834006
00:22:10.822      },
00:22:10.822      {
00:22:10.822        "name": "thread2",
00:22:10.822        "id": 2,
00:22:10.822        "cpumask": "4",
00:22:10.822        "elapsed": 11419225050
00:22:10.822      }
00:22:10.822    ]
00:22:10.822  }
00:22:10.822  {
00:22:10.822    "lcore": 2,
00:22:10.822    "busy": 67862909490,
00:22:10.822    "idle": 9781444854,
00:22:10.822    "in_interrupt": true,
00:22:10.822    "core_freq": 2300,
00:22:10.822    "lw_threads": []
00:22:10.822  }
00:22:10.822  {
00:22:10.822    "lcore": 3,
00:22:10.822    "busy": 58890992580,
00:22:10.822    "idle": 4949812896,
00:22:10.822    "in_interrupt": false,
00:22:10.822    "core_freq": 2300,
00:22:10.822    "lw_threads": [
00:22:10.822      {
00:22:10.822        "name": "thread3",
00:22:10.822        "id": 3,
00:22:10.822        "cpumask": "8",
00:22:10.822        "elapsed": 58808733084
00:22:10.822      }
00:22:10.822    ]
00:22:10.822  }
00:22:10.822  {
00:22:10.822    "lcore": 4,
00:22:10.822    "busy": 36116660926,
00:22:10.822    "idle": 4950460934,
00:22:10.822    "in_interrupt": false,
00:22:10.822    "core_freq": 2300,
00:22:10.822    "lw_threads": [
00:22:10.822      {
00:22:10.822        "name": "thread4",
00:22:10.822        "id": 4,
00:22:10.823        "cpumask": "10",
00:22:10.823        "elapsed": 35804234098
00:22:10.823      }
00:22:10.823    ]
00:22:10.823  }
00:22:10.823  {
00:22:10.823    "lcore": 37,
00:22:10.823    "busy": 0,
00:22:10.823    "idle": 4031003094,
00:22:10.823    "in_interrupt": true,
00:22:10.823    "core_freq": 2300,
00:22:10.823    "lw_threads": []
00:22:10.823  }
00:22:10.823  {
00:22:10.823    "lcore": 38,
00:22:10.823    "busy": 0,
00:22:10.823    "idle": 4031261042,
00:22:10.823    "in_interrupt": true,
00:22:10.823    "core_freq": 2300,
00:22:10.823    "lw_threads": []
00:22:10.823  }
00:22:10.823  {
00:22:10.823    "lcore": 39,
00:22:10.823    "busy": 0,
00:22:10.823    "idle": 4031517102,
00:22:10.823    "in_interrupt": true,
00:22:10.823    "core_freq": 2300,
00:22:10.823    "lw_threads": []
00:22:10.823  }
00:22:10.823  {
00:22:10.823    "lcore": 40,
00:22:10.823    "busy": 0,
00:22:10.823    "idle": 4031948184,
00:22:10.823    "in_interrupt": true,
00:22:10.823    "core_freq": 2300,
00:22:10.823    "lw_threads": []
00:22:10.823  }'
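For reading these reactor dumps: framework_get_reactors emits one JSON object per lcore; "cpumask" is a hex bitmask of cores ("2" = bit 1 = lcore 1, "4" = bit 2 = lcore 2, "10" = bit 4 = lcore 4), "busy"/"idle"/"elapsed" appear to be cumulative tick counters, and "in_interrupt" marks a reactor that has dropped to interrupt mode. Note that thread2 keeps cpumask "4" but is now listed under lcore 1: with lcore 2 idle and in interrupt mode, the scheduler has parked the thread on the main reactor. A small helper (hypothetical, for illustration only) to expand such a mask:

  mask_to_cores() {            # e.g. mask_to_cores 10 -> 4
      local mask=$((16#$1)) core=0 out=()
      while (( mask )); do
          if (( mask & 1 )); then out+=("$core"); fi
          mask=$(( mask >> 1 )); core=$(( core + 1 ))
      done
      printf '%s\n' "${out[*]}"
  }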
00:22:10.823    10:59:59	-- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 2) | .lw_threads[].id'
00:22:10.823   10:59:59	-- scheduler/interrupt.sh@68 -- # [[ -z '' ]]
00:22:10.823    10:59:59	-- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread2")'
00:22:10.823   10:59:59	-- scheduler/interrupt.sh@69 -- # [[ -n {
00:22:10.823    "name": "thread2",
00:22:10.823    "id": 2,
00:22:10.823    "cpumask": "4",
00:22:10.823    "elapsed": 11419225050
00:22:10.823  } ]]
00:22:10.823   10:59:59	-- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 ))
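The two jq probes at @68/@69 are this step's assertion: lcore 2 must report an empty lw_threads list (so the first probe prints nothing and [[ -z '' ]] holds) and thread2 must now appear under lcore 1; together with is_idle[cpu] == 1 at @70 the step passes. The same check, condensed (assuming $reactor_framework holds the dump captured above):

  moved=$(jq -r 'select(.lcore == 2) | .lw_threads[].id' <<< "$reactor_framework")
  parked=$(jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread2")' <<< "$reactor_framework")
  [[ -z $moved && -n $parked ]] && echo 'thread2 parked on the main reactor'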
00:22:10.823   10:59:59	-- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}"
00:22:10.823   10:59:59	-- scheduler/interrupt.sh@64 -- # active_thread 3 0
00:22:10.823   10:59:59	-- scheduler/common.sh@479 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 3 0
00:22:10.823   10:59:59	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:10.823   10:59:59	-- common/autotest_common.sh@10 -- # set +x
00:22:10.823   10:59:59	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:10.823   10:59:59	-- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu")
00:22:10.823   10:59:59	-- scheduler/interrupt.sh@66 -- # collect_cpu_idle
00:22:10.823   10:59:59	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:22:10.823   10:59:59	-- scheduler/common.sh@628 -- # local time=5
00:22:10.823   10:59:59	-- scheduler/common.sh@629 -- # local cpu
00:22:10.823   10:59:59	-- scheduler/common.sh@630 -- # local samples
00:22:10.823   10:59:59	-- scheduler/common.sh@631 -- # is_idle=()
00:22:10.823   10:59:59	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:10.823   10:59:59	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 3 5
00:22:10.823  Collecting cpu idle stats (cpus: 3) for 5 seconds...
00:22:10.823   10:59:59	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 3
00:22:10.823   10:59:59	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:10.823   10:59:59	-- common/autotest_common.sh@10 -- # set +x
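collect_cpu_idle now repeats the measurement for cpu3: get_cpu_time 5 idle 0 1 3 samples the core once a second for five seconds. The "user/nice/system samples" printed afterwards are cumulative jiffies, consistent with reads of /proc/stat; one such read might look like this (field layout per proc(5) - an assumption, since get_cpu_time's body is not in this trace):

  # cumulative jiffies for cpu3: user nice system idle iowait irq softirq ...
  read -r _ user nice system idle _ < <(grep '^cpu3 ' /proc/stat)
  echo "user=$user nice=$nice system=$system idle=$idle"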
00:22:17.386   11:00:05	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:17.386   11:00:05	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:17.386   11:00:05	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:17.386    11:00:05	-- scheduler/common.sh@641 -- # calc_median 0 0 12 100 99
00:22:17.386    11:00:05	-- scheduler/common.sh@727 -- # samples=('0' '0' '12' '100' '99')
00:22:17.386    11:00:05	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:17.386    11:00:05	-- scheduler/common.sh@728 -- # local middle median sample
00:22:17.386    11:00:05	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:17.386     11:00:05	-- scheduler/common.sh@730 -- # printf '%s\n' 0 0 12 100 99
00:22:17.386     11:00:05	-- scheduler/common.sh@730 -- # sort -n
00:22:17.386    11:00:05	-- scheduler/common.sh@732 -- # middle=2
00:22:17.386    11:00:05	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:17.386    11:00:05	-- scheduler/common.sh@736 -- # median=12
00:22:17.386    11:00:05	-- scheduler/common.sh@739 -- # echo 12
00:22:17.386   11:00:05	-- scheduler/common.sh@641 -- # load_median=12
00:22:17.386   11:00:05	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 3 '0 0 12 100 99' 42 12
00:22:17.386  * cpu3 idle samples: 0 0 12 100 99 (avg: 42%, median: 12%)
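calc_median above sorts the five samples (0 0 12 100 99 -> 0 0 12 99 100) and takes the middle element, index 2, giving 12; the even-count branch at @733 is not exercised here. As a standalone function (the even case averaging the two middle values is an assumption, since that branch is never traced):

  calc_median() {
      local -a s=($(printf '%s\n' "$@" | sort -n))
      local middle=$(( $# / 2 ))
      if (( $# % 2 == 0 )); then
          echo $(( (s[middle - 1] + s[middle]) / 2 ))   # assumed even-count behaviour
      else
          echo "${s[middle]}"
      fi
  }
  calc_median 0 0 12 100 99    # -> 12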
00:22:17.386    11:00:05	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 3 user
00:22:17.386    11:00:05	-- scheduler/common.sh@678 -- # local cpu=3 time=user
00:22:17.386    11:00:05	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:17.386    11:00:05	-- scheduler/common.sh@682 -- # [[ -v raw_samples_3 ]]
00:22:17.386    11:00:05	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_3
00:22:17.386    11:00:05	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:17.386    11:00:05	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:17.386    11:00:05	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:17.386    11:00:05	-- scheduler/common.sh@690 -- # case "$time" in
00:22:17.386    11:00:05	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:17.386     11:00:05	-- scheduler/common.sh@691 -- # trap - ERR
00:22:17.386     11:00:05	-- scheduler/common.sh@691 -- # print_backtrace
00:22:17.386     11:00:05	-- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]]
00:22:17.386     11:00:05	-- common/autotest_common.sh@1142 -- # return 0
00:22:17.386     11:00:05	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:17.386    11:00:05	-- scheduler/common.sh@697 -- # usage=0
00:22:17.386    11:00:05	-- scheduler/common.sh@698 -- # usage=0
00:22:17.386    11:00:05	-- scheduler/common.sh@700 -- # printf %u 0
00:22:17.386    11:00:05	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 3 user 0
00:22:17.386  * cpu3 user usage: 0
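The trap/print_backtrace lines at @691 are benign: the user-jiffy delta between the last two samples (135083 - 135083) is zero, the (( ... )) arithmetic therefore returns a non-zero status, and the ERR trap fires; print_backtrace returns immediately (the flag string hxBET carries no 'e'), execution continues, and usage resolves to 0. The usage figure itself looks like the jiffy delta scaled against CLK_TCK over the window - an assumption, as cpu_usage_clk_tck's full body is not in this trace:

  clk_tck=$(getconf CLK_TCK)                     # typically 100
  clk_delta=$(( user_now - user_prev ))          # 0 in this pass
  usage=$(( clk_delta * 100 / (clk_tck * 5) ))   # assumed scaling over the 5s window
  printf '* cpu%u user usage: %u\n' "$cpu" "$usage"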
00:22:17.386    11:00:05	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 3 '134895 134995 135083 135083 135083'
00:22:17.386  * cpu3 user samples: 134895 134995 135083 135083 135083
00:22:17.386    11:00:05	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 3 '0 0 0 0 0'
00:22:17.386  * cpu3 nice samples: 0 0 0 0 0
00:22:17.386    11:00:05	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 3 '10410 10410 10410 10410 10410'
00:22:17.386  * cpu3 system samples: 10410 10410 10410 10410 10410
00:22:17.386   11:00:05	-- scheduler/common.sh@652 -- # user_load=0
00:22:17.386   11:00:05	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:17.386   11:00:05	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 3
00:22:17.386  * cpu3 is idle
00:22:17.386   11:00:05	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
00:22:17.386    11:00:05	-- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors
00:22:17.386    11:00:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:17.386    11:00:05	-- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]'
00:22:17.386    11:00:05	-- common/autotest_common.sh@10 -- # set +x
00:22:17.386    11:00:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@67 -- # reactor_framework='{
00:22:17.386    "lcore": 1,
00:22:17.386    "busy": 4023155380,
00:22:17.386    "idle": 124881034032,
00:22:17.386    "in_interrupt": false,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": [
00:22:17.386      {
00:22:17.386        "name": "app_thread",
00:22:17.386        "id": 1,
00:22:17.386        "cpumask": "2",
00:22:17.386        "elapsed": 128924527432
00:22:17.386      },
00:22:17.386      {
00:22:17.386        "name": "thread2",
00:22:17.386        "id": 2,
00:22:17.386        "cpumask": "4",
00:22:17.386        "elapsed": 25911918476
00:22:17.386      },
00:22:17.386      {
00:22:17.386        "name": "thread3",
00:22:17.386        "id": 3,
00:22:17.386        "cpumask": "8",
00:22:17.386        "elapsed": 9808899046
00:22:17.386      }
00:22:17.386    ]
00:22:17.386  }
00:22:17.386  {
00:22:17.386    "lcore": 2,
00:22:17.386    "busy": 67862909490,
00:22:17.386    "idle": 9781444854,
00:22:17.386    "in_interrupt": true,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": []
00:22:17.386  }
00:22:17.386  {
00:22:17.386    "lcore": 3,
00:22:17.386    "busy": 59351371256,
00:22:17.386    "idle": 11621378740,
00:22:17.386    "in_interrupt": true,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": []
00:22:17.386  }
00:22:17.386  {
00:22:17.386    "lcore": 4,
00:22:17.386    "busy": 50379289928,
00:22:17.386    "idle": 4950460934,
00:22:17.386    "in_interrupt": false,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": [
00:22:17.386      {
00:22:17.386        "name": "thread4",
00:22:17.386        "id": 4,
00:22:17.386        "cpumask": "10",
00:22:17.386        "elapsed": 50296927524
00:22:17.386      }
00:22:17.386    ]
00:22:17.386  }
00:22:17.386  {
00:22:17.386    "lcore": 37,
00:22:17.386    "busy": 0,
00:22:17.386    "idle": 4031003094,
00:22:17.386    "in_interrupt": true,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": []
00:22:17.386  }
00:22:17.386  {
00:22:17.386    "lcore": 38,
00:22:17.386    "busy": 0,
00:22:17.386    "idle": 4031261042,
00:22:17.386    "in_interrupt": true,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": []
00:22:17.386  }
00:22:17.386  {
00:22:17.386    "lcore": 39,
00:22:17.386    "busy": 0,
00:22:17.386    "idle": 4031517102,
00:22:17.386    "in_interrupt": true,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": []
00:22:17.386  }
00:22:17.386  {
00:22:17.386    "lcore": 40,
00:22:17.386    "busy": 0,
00:22:17.386    "idle": 4031948184,
00:22:17.386    "in_interrupt": true,
00:22:17.386    "core_freq": 2300,
00:22:17.386    "lw_threads": []
00:22:17.386  }'
00:22:17.386    11:00:05	-- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 3) | .lw_threads[].id'
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@68 -- # [[ -z '' ]]
00:22:17.386    11:00:05	-- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread3")'
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@69 -- # [[ -n {
00:22:17.386    "name": "thread3",
00:22:17.386    "id": 3,
00:22:17.386    "cpumask": "8",
00:22:17.386    "elapsed": 9808899046
00:22:17.386  } ]]
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 ))
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@63 -- # for cpu in "${!threads[@]}"
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@64 -- # active_thread 4 0
00:22:17.386   11:00:05	-- scheduler/common.sh@479 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 4 0
00:22:17.386   11:00:05	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:17.386   11:00:05	-- common/autotest_common.sh@10 -- # set +x
00:22:17.386   11:00:05	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@65 -- # cpus_to_collect=("$cpu")
00:22:17.386   11:00:05	-- scheduler/interrupt.sh@66 -- # collect_cpu_idle
00:22:17.386   11:00:05	-- scheduler/common.sh@626 -- # (( 1 > 0 ))
00:22:17.386   11:00:05	-- scheduler/common.sh@628 -- # local time=5
00:22:17.386   11:00:05	-- scheduler/common.sh@629 -- # local cpu
00:22:17.386   11:00:05	-- scheduler/common.sh@630 -- # local samples
00:22:17.386   11:00:05	-- scheduler/common.sh@631 -- # is_idle=()
00:22:17.386   11:00:05	-- scheduler/common.sh@631 -- # local -g is_idle
00:22:17.386   11:00:05	-- scheduler/common.sh@633 -- # printf 'Collecting cpu idle stats (cpus: %s) for %u seconds...\n' 4 5
00:22:17.386  Collecting cpu idle stats (cpus: 4) for 5 seconds...
00:22:17.386   11:00:05	-- scheduler/common.sh@636 -- # get_cpu_time 5 idle 0 1 4
00:22:17.386   11:00:05	-- scheduler/common.sh@483 -- # xtrace_disable
00:22:17.386   11:00:05	-- common/autotest_common.sh@10 -- # set +x
00:22:23.946   11:00:11	-- scheduler/common.sh@638 -- # local user_load load_median user_spdk_load
00:22:23.946   11:00:11	-- scheduler/common.sh@639 -- # for cpu in "${cpus_to_collect[@]}"
00:22:23.946   11:00:11	-- scheduler/common.sh@640 -- # samples=(${cpu_times[cpu]})
00:22:23.946    11:00:11	-- scheduler/common.sh@641 -- # calc_median 0 0 41 100 96
00:22:23.946    11:00:11	-- scheduler/common.sh@727 -- # samples=('0' '0' '41' '100' '96')
00:22:23.946    11:00:11	-- scheduler/common.sh@727 -- # local samples samples_sorted
00:22:23.946    11:00:11	-- scheduler/common.sh@728 -- # local middle median sample
00:22:23.946    11:00:11	-- scheduler/common.sh@730 -- # samples_sorted=($(printf '%s\n' "${samples[@]}" | sort -n))
00:22:23.946     11:00:11	-- scheduler/common.sh@730 -- # printf '%s\n' 0 0 41 100 96
00:22:23.946     11:00:11	-- scheduler/common.sh@730 -- # sort -n
00:22:23.946    11:00:11	-- scheduler/common.sh@732 -- # middle=2
00:22:23.946    11:00:11	-- scheduler/common.sh@733 -- # (( 5 % 2 == 0 ))
00:22:23.946    11:00:11	-- scheduler/common.sh@736 -- # median=41
00:22:23.946    11:00:11	-- scheduler/common.sh@739 -- # echo 41
00:22:23.946   11:00:11	-- scheduler/common.sh@641 -- # load_median=41
00:22:23.946   11:00:11	-- scheduler/common.sh@642 -- # printf '* cpu%u idle samples: %s (avg: %u%%, median: %u%%)\n' 4 '0 0 41 100 96' 47 41
00:22:23.946  * cpu4 idle samples: 0 0 41 100 96 (avg: 47%, median: 41%)
00:22:23.946    11:00:11	-- scheduler/common.sh@652 -- # cpu_usage_clk_tck 4 user
00:22:23.946    11:00:11	-- scheduler/common.sh@678 -- # local cpu=4 time=user
00:22:23.946    11:00:11	-- scheduler/common.sh@679 -- # local user nice system usage clk_delta
00:22:23.946    11:00:11	-- scheduler/common.sh@682 -- # [[ -v raw_samples_4 ]]
00:22:23.946    11:00:11	-- scheduler/common.sh@684 -- # local -n raw_samples=raw_samples_4
00:22:23.946    11:00:11	-- scheduler/common.sh@685 -- # user=("${!raw_samples[cpu_time_map["user"]]}")
00:22:23.946    11:00:11	-- scheduler/common.sh@686 -- # nice=("${!raw_samples[cpu_time_map["nice"]]}")
00:22:23.946    11:00:11	-- scheduler/common.sh@687 -- # system=("${!raw_samples[cpu_time_map["system"]]}")
00:22:23.946    11:00:11	-- scheduler/common.sh@690 -- # case "$time" in
00:22:23.946    11:00:11	-- scheduler/common.sh@691 -- # (( clk_delta += (user[-1] - user[-2]) ))
00:22:23.946     11:00:11	-- scheduler/common.sh@697 -- # getconf CLK_TCK
00:22:23.946    11:00:11	-- scheduler/common.sh@697 -- # usage=1
00:22:23.946    11:00:11	-- scheduler/common.sh@698 -- # usage=1
00:22:23.946    11:00:11	-- scheduler/common.sh@700 -- # printf %u 1
00:22:23.946    11:00:11	-- scheduler/common.sh@701 -- # printf '* cpu%u %s usage: %u\n' 4 user 1
00:22:23.946  * cpu4 user usage: 1
00:22:23.946    11:00:11	-- scheduler/common.sh@702 -- # printf '* cpu%u user samples: %s\n' 4 '65712 65813 65870 65870 65871'
00:22:23.946  * cpu4 user samples: 65712 65813 65870 65870 65871
00:22:23.946    11:00:11	-- scheduler/common.sh@703 -- # printf '* cpu%u nice samples: %s\n' 4 '0 0 0 0 0'
00:22:23.946  * cpu4 nice samples: 0 0 0 0 0
00:22:23.946    11:00:11	-- scheduler/common.sh@704 -- # printf '* cpu%u system samples: %s\n' 4 '10917 10917 10918 10918 10921'
00:22:23.946  * cpu4 system samples: 10917 10917 10918 10918 10921
00:22:23.946   11:00:11	-- scheduler/common.sh@652 -- # user_load=1
00:22:23.946   11:00:11	-- scheduler/common.sh@653 -- # (( samples[-1] >= 70 ))
00:22:23.946   11:00:11	-- scheduler/common.sh@654 -- # printf '* cpu%u is idle\n' 4
00:22:23.946  * cpu4 is idle
00:22:23.946   11:00:11	-- scheduler/common.sh@655 -- # is_idle[cpu]=1
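Worth noting for cpu4: the average (47%) and median (41%) sit well under 70, yet the core is declared idle, because the verdict at @653 looks only at the final sample (96). The core was saturated while thread4 was active and quiesced once the thread was deactivated, which is precisely the transition this test probes; the single stray user tick (user_load=1) does not change the verdict.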
00:22:23.946    11:00:11	-- scheduler/interrupt.sh@67 -- # rpc_cmd framework_get_reactors
00:22:23.946    11:00:11	-- scheduler/interrupt.sh@67 -- # jq -r '.reactors[]'
00:22:23.946    11:00:11	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:23.946    11:00:11	-- common/autotest_common.sh@10 -- # set +x
00:22:23.946    11:00:11	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@67 -- # reactor_framework='{
00:22:23.946    "lcore": 1,
00:22:23.946    "busy": 4046215860,
00:22:23.946    "idle": 139101949790,
00:22:23.946    "in_interrupt": false,
00:22:23.946    "core_freq": 1900,
00:22:23.946    "lw_threads": [
00:22:23.946      {
00:22:23.946        "name": "app_thread",
00:22:23.946        "id": 1,
00:22:23.946        "cpumask": "2",
00:22:23.946        "elapsed": 143168464694
00:22:23.946      },
00:22:23.946      {
00:22:23.946        "name": "thread2",
00:22:23.946        "id": 2,
00:22:23.946        "cpumask": "4",
00:22:23.946        "elapsed": 40155855738
00:22:23.946      },
00:22:23.946      {
00:22:23.946        "name": "thread3",
00:22:23.946        "id": 3,
00:22:23.946        "cpumask": "8",
00:22:23.946        "elapsed": 24052836308
00:22:23.946      },
00:22:23.946      {
00:22:23.946        "name": "thread4",
00:22:23.946        "id": 4,
00:22:23.946        "cpumask": "10",
00:22:23.946        "elapsed": 10269168218
00:22:23.946      }
00:22:23.946    ]
00:22:23.946  }
00:22:23.946  {
00:22:23.946    "lcore": 2,
00:22:23.946    "busy": 67862909490,
00:22:23.946    "idle": 9781444854,
00:22:23.946    "in_interrupt": true,
00:22:23.946    "core_freq": 2300,
00:22:23.946    "lw_threads": []
00:22:23.946  }
00:22:23.946  {
00:22:23.946    "lcore": 3,
00:22:23.946    "busy": 59351371256,
00:22:23.946    "idle": 11621378740,
00:22:23.946    "in_interrupt": true,
00:22:23.946    "core_freq": 2300,
00:22:23.946    "lw_threads": []
00:22:23.946  }
00:22:23.946  {
00:22:23.946    "lcore": 4,
00:22:23.946    "busy": 50609628392,
00:22:23.946    "idle": 10912948360,
00:22:23.946    "in_interrupt": true,
00:22:23.946    "core_freq": 2300,
00:22:23.946    "lw_threads": []
00:22:23.946  }
00:22:23.946  {
00:22:23.946    "lcore": 37,
00:22:23.946    "busy": 0,
00:22:23.946    "idle": 4031003094,
00:22:23.946    "in_interrupt": true,
00:22:23.946    "core_freq": 2300,
00:22:23.946    "lw_threads": []
00:22:23.946  }
00:22:23.946  {
00:22:23.946    "lcore": 38,
00:22:23.946    "busy": 0,
00:22:23.946    "idle": 4031261042,
00:22:23.946    "in_interrupt": true,
00:22:23.946    "core_freq": 2300,
00:22:23.946    "lw_threads": []
00:22:23.946  }
00:22:23.946  {
00:22:23.946    "lcore": 39,
00:22:23.946    "busy": 0,
00:22:23.946    "idle": 4031517102,
00:22:23.946    "in_interrupt": true,
00:22:23.946    "core_freq": 2300,
00:22:23.946    "lw_threads": []
00:22:23.946  }
00:22:23.946  {
00:22:23.946    "lcore": 40,
00:22:23.946    "busy": 0,
00:22:23.946    "idle": 4031948184,
00:22:23.946    "in_interrupt": true,
00:22:23.946    "core_freq": 2300,
00:22:23.946    "lw_threads": []
00:22:23.946  }'
00:22:23.946    11:00:11	-- scheduler/interrupt.sh@68 -- # jq -r 'select(.lcore == 4) | .lw_threads[].id'
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@68 -- # [[ -z '' ]]
00:22:23.946    11:00:11	-- scheduler/interrupt.sh@69 -- # jq -r 'select(.lcore == 1) | .lw_threads[] | select(.name == "thread4")'
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@69 -- # [[ -n {
00:22:23.946    "name": "thread4",
00:22:23.946    "id": 4,
00:22:23.946    "cpumask": "10",
00:22:23.946    "elapsed": 10269168218
00:22:23.946  } ]]
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@70 -- # (( is_idle[cpu] == 1 ))
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}"
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@74 -- # destroy_thread 2
00:22:23.946   11:00:11	-- scheduler/common.sh@475 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 2
00:22:23.946   11:00:11	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:23.946   11:00:11	-- common/autotest_common.sh@10 -- # set +x
00:22:23.946   11:00:11	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}"
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@74 -- # destroy_thread 3
00:22:23.946   11:00:11	-- scheduler/common.sh@475 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 3
00:22:23.946   11:00:11	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:23.946   11:00:11	-- common/autotest_common.sh@10 -- # set +x
00:22:23.946   11:00:11	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@73 -- # for cpu in "${!threads[@]}"
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@74 -- # destroy_thread 4
00:22:23.946   11:00:11	-- scheduler/common.sh@475 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 4
00:22:23.946   11:00:11	-- common/autotest_common.sh@561 -- # xtrace_disable
00:22:23.946   11:00:11	-- common/autotest_common.sh@10 -- # set +x
00:22:23.946   11:00:11	-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
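With all three cores verified, teardown deletes the spawned threads through the scheduler plugin RPC, one call per thread id, as traced at interrupt.sh@73-74. Condensed:

  for id in 2 3 4; do
      rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$id"
  done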
00:22:23.946   11:00:11	-- scheduler/interrupt.sh@1 -- # killprocess 2221244
00:22:23.946   11:00:11	-- common/autotest_common.sh@936 -- # '[' -z 2221244 ']'
00:22:23.946   11:00:11	-- common/autotest_common.sh@940 -- # kill -0 2221244
00:22:23.946    11:00:11	-- common/autotest_common.sh@941 -- # uname
00:22:23.947   11:00:11	-- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:22:23.947    11:00:11	-- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2221244
00:22:23.947   11:00:11	-- common/autotest_common.sh@942 -- # process_name=reactor_1
00:22:23.947   11:00:11	-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:22:23.947   11:00:11	-- common/autotest_common.sh@954 -- # echo 'killing process with pid 2221244'
00:22:23.947  killing process with pid 2221244
00:22:23.947   11:00:11	-- common/autotest_common.sh@955 -- # kill 2221244
00:22:23.947  [2024-12-15 11:00:11.939842] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:22:23.947   11:00:11	-- common/autotest_common.sh@960 -- # wait 2221244
00:22:23.947  POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:22:23.947  POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:22:23.947  POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:22:23.947  POWER: Power management governor of lcore 4 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 4 has exited from 'performance' mode and been set back to the original
00:22:23.947  POWER: Power management governor of lcore 37 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 37 has exited from 'performance' mode and been set back to the original
00:22:23.947  POWER: Power management governor of lcore 38 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 38 has exited from 'performance' mode and been set back to the original
00:22:23.947  POWER: Power management governor of lcore 39 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 39 has exited from 'performance' mode and been set back to the original
00:22:23.947  POWER: Power management governor of lcore 40 has been set to 'powersave' successfully
00:22:23.947  POWER: Power management of lcore 40 has exited from 'performance' mode and been set back to the original
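The POWER lines are the exit hook undoing the test's cpufreq changes: each managed lcore leaves the 'performance' governor it was pinned to for the run and returns to its original 'powersave'. A quick way to spot-check one core afterwards, assuming the usual sysfs cpufreq layout:

  cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor   # expect: powersave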
00:22:23.947  
00:22:23.947  real	1m3.334s
00:22:23.947  user	2m42.520s
00:22:23.947  sys	0m1.168s
00:22:23.947   11:00:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:23.947   11:00:12	-- common/autotest_common.sh@10 -- # set +x
00:22:23.947  ************************************
00:22:23.947  END TEST interrupt_mode
00:22:23.947  ************************************
00:22:23.947   11:00:12	-- scheduler/scheduler.sh@1 -- # restore_cgroups
00:22:23.947   11:00:12	-- scheduler/isolate_cores.sh@11 -- # xtrace_disable
00:22:23.947   11:00:12	-- common/autotest_common.sh@10 -- # set +x
00:22:23.947  Moving 2212292 (PF_SUPERPRIV,PF_RANDOMIZE) to / from /cpuset
00:22:23.947  Moved 1 processes, failed 0
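restore_cgroups then moves the autotest's main PID out of the isolated /cpuset group back to the root, so core isolation does not outlive the run. Mechanically that is a write of the PID into the destination group's task list; a sketch for cgroup v1 (an assumption - the trace does not show which cgroup hierarchy the host uses):

  echo 2212292 > /sys/fs/cgroup/cpuset/tasks    # v1; under v2 it would be .../cgroup.procs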
00:22:23.947  
00:22:23.947  real	1m42.185s
00:22:23.947  user	4m12.467s
00:22:23.947  sys	0m10.001s
00:22:23.947   11:00:12	-- common/autotest_common.sh@1115 -- # xtrace_disable
00:22:23.947   11:00:12	-- common/autotest_common.sh@10 -- # set +x
00:22:23.947  ************************************
00:22:23.947  END TEST scheduler
00:22:23.947  ************************************
00:22:23.947   11:00:12	-- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:22:23.947   11:00:12	-- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:22:23.947   11:00:12	-- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:22:23.947   11:00:12	-- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:22:23.947   11:00:12	-- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:22:23.947   11:00:12	-- common/autotest_common.sh@722 -- # xtrace_disable
00:22:23.947   11:00:12	-- common/autotest_common.sh@10 -- # set +x
00:22:23.947   11:00:12	-- spdk/autotest.sh@373 -- # autotest_cleanup
00:22:23.947   11:00:12	-- common/autotest_common.sh@1381 -- # local autotest_es=0
00:22:23.947   11:00:12	-- common/autotest_common.sh@1382 -- # xtrace_disable
00:22:23.947   11:00:12	-- common/autotest_common.sh@10 -- # set +x
00:22:28.142  INFO: APP EXITING
00:22:28.142  INFO: killing all VMs
00:22:28.142  INFO: killing vhost app
00:22:28.142  INFO: EXIT DONE
00:22:31.436  Waiting for block devices as requested
00:22:31.436  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:22:31.436  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:31.695  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:31.695  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:31.695  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:31.955  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:31.955  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:31.955  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:32.215  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:22:32.215  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:32.215  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:32.474  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:32.475  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:32.475  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:32.735  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:32.735  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:32.735  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
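"Waiting for block devices as requested" is the post-test driver reset: the NVMe SSD at 0000:5e:00.0 and the sixteen ioatdma channels are unbound from vfio-pci and handed back to their kernel drivers (normally done via scripts/setup.sh reset). The sysfs mechanics behind one such rebind look roughly like this (a sketch, not the script's exact code):

  bdf=0000:5e:00.0
  echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/unbind
  echo "$bdf" > /sys/bus/pci/drivers_probe      # let the kernel re-probe (nvme/ioatdma)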
00:22:36.028  Cleaning
00:22:36.028  Removing:    /var/run/dpdk/spdk0/config
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:22:36.028  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:22:36.028  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:22:36.028  Removing:    /dev/shm/bdevperf_trace.pid2203955
00:22:36.028  Removing:    /dev/shm/spdk_tgt_trace.pid2078948
00:22:36.028  Removing:    /var/run/dpdk/spdk0
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2076372
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2077559
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2078948
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2079739
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2079995
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2080402
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2080774
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2081083
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2081280
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2081479
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2081712
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2082481
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2085216
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2085598
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2085974
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2086007
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2086928
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2086964
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2087770
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2087875
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2088256
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2088440
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2088653
00:22:36.028  Removing:    /var/run/dpdk/spdk_pid2088835
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2089306
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2089501
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2089819
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2090163
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2090278
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2090355
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2090910
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2091270
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2091449
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2091647
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2091831
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2092032
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2092220
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2092415
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2092601
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2092850
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2093086
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2093342
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2093528
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2093722
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2093905
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2094104
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2094287
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2094486
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2094675
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2094961
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2095214
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2095413
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2095596
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2095791
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2095985
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2096182
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2096369
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2096581
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2096838
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2097108
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2097294
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2097491
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2097683
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2097880
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2098071
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2098268
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2098484
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2098764
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2098995
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2099199
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2099440
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2099706
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2100198
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2101434
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2102398
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2105056
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2106851
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2108482
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2109577
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2109662
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2109840
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2114002
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2114934
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2118458
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2120214
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2121883
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2122981
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2123157
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2123193
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2136410
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2137902
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2138790
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2139699
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2142855
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2148869
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2153059
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2159657
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2165129
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2171974
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2173209
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2181130
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2194697
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2194993
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2198265
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2201558
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2202288
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2203181
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2203955
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2204394
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2205688
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2207022
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2207726
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2208472
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2208839
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2209137
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2213531
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2217031
00:22:36.029  Removing:    /var/run/dpdk/spdk_pid2221244
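The Removing: list is autotest_cleanup clearing DPDK and SPDK runtime state: spdk0's config and hugepage memseg/memzone metadata under /var/run/dpdk, the shared-memory trace files in /dev/shm, and one spdk_pid directory per SPDK process launched during the run. A manual equivalent would be roughly (a sketch):

  rm -rf /var/run/dpdk/spdk0 /var/run/dpdk/spdk_pid*
  rm -f /dev/shm/spdk_tgt_trace.pid* /dev/shm/bdevperf_trace.pid*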
00:22:36.029  Clean
00:22:36.288  killing process with pid 2033682
00:22:42.867  killing process with pid 2033679
00:22:42.867  killing process with pid 2033681
00:22:42.867  killing process with pid 2033680
00:22:42.867   11:00:31	-- common/autotest_common.sh@1446 -- # return 0
00:22:42.867   11:00:31	-- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:22:42.867   11:00:31	-- common/autotest_common.sh@728 -- # xtrace_disable
00:22:42.867   11:00:31	-- common/autotest_common.sh@10 -- # set +x
00:22:42.867   11:00:31	-- spdk/autotest.sh@376 -- # timing_exit autotest
00:22:42.867   11:00:31	-- common/autotest_common.sh@728 -- # xtrace_disable
00:22:42.867   11:00:31	-- common/autotest_common.sh@10 -- # set +x
00:22:42.867   11:00:31	-- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt
00:22:42.867   11:00:31	-- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log ]]
00:22:42.867   11:00:31	-- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/udev.log
00:22:42.867   11:00:31	-- spdk/autotest.sh@381 -- # [[ y == y ]]
00:22:42.867    11:00:31	-- spdk/autotest.sh@383 -- # hostname
00:22:42.867   11:00:31	-- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvme-phy-autotest/spdk -t spdk-wfp-45 -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info
00:22:43.126  geninfo: WARNING: invalid characters removed from testname!
00:23:05.068   11:00:52	-- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:07.605   11:00:56	-- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:10.142   11:00:59	-- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:12.680   11:01:01	-- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:15.217   11:01:04	-- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:17.756   11:01:06	-- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/cov_total.info
00:23:20.293   11:01:09	-- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
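The lcov run above is the coverage post-processing: @383 captures cov_test.info from the instrumented tree (tagged with the host name), @384 merges it with the pre-test baseline into cov_total.info, @385-@392 strip dpdk, /usr (that one additionally passing --ignore-errors unused,unused), the vmd example and the spdk_lspci/spdk_top apps from the total, and @393 removes the intermediates. Condensed, with the repeated --rc options folded into $LCOV_OPTS:

  lcov $LCOV_OPTS -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
  lcov $LCOV_OPTS -q -a cov_base.info -a cov_test.info -o cov_total.info
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $LCOV_OPTS -q -r cov_total.info "$pat" -o cov_total.info
  done
  rm -f cov_base.info cov_test.info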
00:23:20.553     11:01:09	-- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:23:20.553      11:01:09	-- common/autotest_common.sh@1690 -- $ lcov --version
00:23:20.553      11:01:09	-- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:23:20.553     11:01:09	-- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:23:20.553     11:01:09	-- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:23:20.553     11:01:09	-- scripts/common.sh@332 -- $ local ver1 ver1_l
00:23:20.553     11:01:09	-- scripts/common.sh@333 -- $ local ver2 ver2_l
00:23:20.553     11:01:09	-- scripts/common.sh@335 -- $ IFS=.-:
00:23:20.553     11:01:09	-- scripts/common.sh@335 -- $ read -ra ver1
00:23:20.553     11:01:09	-- scripts/common.sh@336 -- $ IFS=.-:
00:23:20.553     11:01:09	-- scripts/common.sh@336 -- $ read -ra ver2
00:23:20.553     11:01:09	-- scripts/common.sh@337 -- $ local 'op=<'
00:23:20.553     11:01:09	-- scripts/common.sh@339 -- $ ver1_l=2
00:23:20.553     11:01:09	-- scripts/common.sh@340 -- $ ver2_l=1
00:23:20.553     11:01:09	-- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:23:20.553     11:01:09	-- scripts/common.sh@343 -- $ case "$op" in
00:23:20.553     11:01:09	-- scripts/common.sh@344 -- $ : 1
00:23:20.553     11:01:09	-- scripts/common.sh@363 -- $ (( v = 0 ))
00:23:20.553     11:01:09	-- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:20.553      11:01:09	-- scripts/common.sh@364 -- $ decimal 1
00:23:20.553      11:01:09	-- scripts/common.sh@352 -- $ local d=1
00:23:20.553      11:01:09	-- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:23:20.553      11:01:09	-- scripts/common.sh@354 -- $ echo 1
00:23:20.553     11:01:09	-- scripts/common.sh@364 -- $ ver1[v]=1
00:23:20.553      11:01:09	-- scripts/common.sh@365 -- $ decimal 2
00:23:20.553      11:01:09	-- scripts/common.sh@352 -- $ local d=2
00:23:20.553      11:01:09	-- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:23:20.553      11:01:09	-- scripts/common.sh@354 -- $ echo 2
00:23:20.553     11:01:09	-- scripts/common.sh@365 -- $ ver2[v]=2
00:23:20.553     11:01:09	-- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:23:20.553     11:01:09	-- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:23:20.553     11:01:09	-- scripts/common.sh@367 -- $ return 0
00:23:20.553     11:01:09	-- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
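The scripts/common.sh walk above is a version gate: lt 1.15 2 splits both strings on '.', '-' and ':', compares component-wise as integers, and succeeds at the first position (1 < 2), so the installed lcov is treated as pre-2.x and the 1.x-era --rc lcov_branch_coverage/lcov_function_coverage knobs are kept (lcov 2.x renamed these options). A minimal standalone equivalent:

  version_lt() {                     # version_lt A B: succeed if A < B
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                       # equal
  }
  version_lt 1.15 2 && echo 'old lcov: use the 1.x --rc option names'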
00:23:20.553     11:01:09	-- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:23:20.553  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:20.553  		--rc genhtml_branch_coverage=1
00:23:20.553  		--rc genhtml_function_coverage=1
00:23:20.553  		--rc genhtml_legend=1
00:23:20.553  		--rc geninfo_all_blocks=1
00:23:20.553  		--rc geninfo_unexecuted_blocks=1
00:23:20.553  		
00:23:20.553  		'
00:23:20.553     11:01:09	-- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:23:20.553  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:20.553  		--rc genhtml_branch_coverage=1
00:23:20.553  		--rc genhtml_function_coverage=1
00:23:20.553  		--rc genhtml_legend=1
00:23:20.553  		--rc geninfo_all_blocks=1
00:23:20.553  		--rc geninfo_unexecuted_blocks=1
00:23:20.553  		
00:23:20.553  		'
00:23:20.553     11:01:09	-- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 
00:23:20.553  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:20.553  		--rc genhtml_branch_coverage=1
00:23:20.553  		--rc genhtml_function_coverage=1
00:23:20.553  		--rc genhtml_legend=1
00:23:20.553  		--rc geninfo_all_blocks=1
00:23:20.553  		--rc geninfo_unexecuted_blocks=1
00:23:20.553  		
00:23:20.553  		'
00:23:20.553     11:01:09	-- common/autotest_common.sh@1704 -- $ LCOV='lcov 
00:23:20.553  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:20.553  		--rc genhtml_branch_coverage=1
00:23:20.553  		--rc genhtml_function_coverage=1
00:23:20.553  		--rc genhtml_legend=1
00:23:20.553  		--rc geninfo_all_blocks=1
00:23:20.553  		--rc geninfo_unexecuted_blocks=1
00:23:20.553  		
00:23:20.553  		'
00:23:20.553    11:01:09	-- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvme-phy-autotest/spdk/scripts/common.sh
00:23:20.553     11:01:09	-- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:23:20.553     11:01:09	-- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:20.553     11:01:09	-- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:23:20.553      11:01:09	-- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.553      11:01:09	-- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.553      11:01:09	-- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.553      11:01:09	-- paths/export.sh@5 -- $ export PATH
00:23:20.553      11:01:09	-- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:20.553    11:01:09	-- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvme-phy-autotest/spdk/../output
00:23:20.553      11:01:09	-- common/autobuild_common.sh@440 -- $ date +%s
00:23:20.553     11:01:09	-- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1734256869.XXXXXX
00:23:20.553    11:01:09	-- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1734256869.czpKYV
00:23:20.553    11:01:09	-- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:23:20.553    11:01:09	-- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:23:20.553    11:01:09	-- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/'
00:23:20.553    11:01:09	-- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp'
00:23:20.553    11:01:09	-- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvme-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:23:20.553     11:01:09	-- common/autobuild_common.sh@456 -- $ get_config_params
00:23:20.553     11:01:09	-- common/autotest_common.sh@397 -- $ xtrace_disable
00:23:20.553     11:01:09	-- common/autotest_common.sh@10 -- $ set +x
00:23:20.553    11:01:09	-- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-ocf --enable-ubsan --enable-coverage --with-ublk'
00:23:20.553   11:01:09	-- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j72
00:23:20.553   11:01:09	-- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvme-phy-autotest/spdk
00:23:20.553   11:01:09	-- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:23:20.553   11:01:09	-- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:23:20.554   11:01:09	-- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:23:20.554   11:01:09	-- spdk/autopackage.sh@19 -- $ timing_finish
00:23:20.554   11:01:09	-- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:23:20.554   11:01:09	-- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:23:20.554   11:01:09	-- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvme-phy-autotest/spdk/../output/timing.txt
00:23:20.554   11:01:09	-- spdk/autopackage.sh@20 -- $ exit 0
00:23:20.554  + [[ -n 1980308 ]]
00:23:20.554  + sudo kill 1980308
00:23:20.564  [Pipeline] }
00:23:20.578  [Pipeline] // stage
00:23:20.583  [Pipeline] }
00:23:20.598  [Pipeline] // timeout
00:23:20.603  [Pipeline] }
00:23:20.618  [Pipeline] // catchError
00:23:20.622  [Pipeline] }
00:23:20.634  [Pipeline] // wrap
00:23:20.639  [Pipeline] }
00:23:20.651  [Pipeline] // catchError
00:23:20.659  [Pipeline] stage
00:23:20.661  [Pipeline] { (Epilogue)
00:23:20.672  [Pipeline] catchError
00:23:20.674  [Pipeline] {
00:23:20.684  [Pipeline] echo
00:23:20.685  Cleanup processes
00:23:20.690  [Pipeline] sh
00:23:20.973  + sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:23:20.973  2238783 sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:23:20.987  [Pipeline] sh
00:23:21.273  ++ sudo pgrep -af /var/jenkins/workspace/nvme-phy-autotest/spdk
00:23:21.273  ++ grep -v 'sudo pgrep'
00:23:21.273  ++ awk '{print $1}'
00:23:21.273  + sudo kill -9
00:23:21.273  + true
00:23:21.285  [Pipeline] sh
00:23:21.571  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:31.571  [Pipeline] sh
00:23:31.972  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:31.973  Artifacts sizes are good
00:23:32.027  [Pipeline] archiveArtifacts
00:23:32.035  Archiving artifacts
00:23:32.135  [Pipeline] sh
00:23:32.422  + sudo chown -R sys_sgci: /var/jenkins/workspace/nvme-phy-autotest
00:23:32.437  [Pipeline] cleanWs
00:23:32.447  [WS-CLEANUP] Deleting project workspace...
00:23:32.447  [WS-CLEANUP] Deferred wipeout is used...
00:23:32.454  [WS-CLEANUP] done
00:23:32.455  [Pipeline] }
00:23:32.470  [Pipeline] // catchError
00:23:32.481  [Pipeline] sh
00:23:32.765  + logger -p user.info -t JENKINS-CI
00:23:32.775  [Pipeline] }
00:23:32.786  [Pipeline] // stage
00:23:32.791  [Pipeline] }
00:23:32.804  [Pipeline] // node
00:23:32.808  [Pipeline] End of Pipeline
00:23:32.867  Finished: SUCCESS